Title: Understanding Chain-of-Thought in LLMs through Information Theory
Paper Decision: Accept (poster)

Review 1:
Summary: The paper introduces an information-theoretic framework to evaluate CoT reasoning in LLMs, quantifying "information gain" at each reasoning step to more accurately assess model performance without requiring annotated data. The approach outperforms existing outcome-based methods in identifying failure modes and providing deeper insights into model reasoning on several benchmark datasets.
Claims And Evidence: The theory presented is promising, but it would benefit from a more in-depth robustness analysis. The model's probability functions much like a reward signal, which can be fragile and easy to hack; this may make the method sensitive to the training settings.
Methods And Evaluation Criteria: This article presents an interesting perspective, but I believe there are several limitations to the assumptions made. For example, the assumption that a single step cannot incorporate multiple operations seems overly restrictive. Additionally, the multiplication scenario does not seem to be directly applicable to the GSM8K dataset, which could impact the generalizability of the findings. Finally, it is important to note that training on specifically designed datasets is still required to achieve meaningful results, which might limit the approach's practical application.
Theoretical Claims: I analyzed the theoretical results in detail; the assumptions are interesting and reasonable. The only drawback may be the large number of restrictions they impose.
Experimental Designs Or Analyses: None.
Supplementary Material: No supplementary material is provided.
Relation To Broader Scientific Literature: None.
Essential References Not Discussed: - Merrill et al. The expressive power of transformers with chain of thought. ICLR 2023.
- Wang et al. How Large Language Models Implement Chain-of-Thought? Arxiv 2023.
- Hanna et al. How does gpt-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. NeurIPS 2023.
- Dutta et al. How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning. TMLR 2024.
- Chen et al. Unlocking the Capabilities of Thought: A Reasoning Boundary Framework to Quantify and Optimize Chain-of-Thought. NeurIPS 2024.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: None.
Questions For Authors: - The proposed paradigm seems challenging when attempting to explain existing R1-like work, particularly in the context of handling exploratory nonlinear or branched inference paths. How does the model account for these complexities, and are there any strategies for maintaining accuracy and coherence in such cases?
- I noticed that the robustness of the model isn't fully addressed. Given that the model's probability distribution appears similar to a reward function, it seems potentially vulnerable to adversarial manipulation or instability. Could the authors elaborate on any measures taken to improve the model's resilience to such issues?
(If the authors answer these questions seriously and discuss more related works, I would consider raising my score to 4 :).)
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: First of all, we would like to thank the reviewer for their time and feedback on our paper. Here below, we discuss the thought-provoking questions raised by the reviewer.
>The proposed paradigm seems challenging when attempting to explain existing R1-like work, particularly in the context of handling exploratory nonlinear or branched inference paths. How does the model account for these complexities, and are there any strategies for maintaining accuracy and coherence in such cases?
Thank you for raising this important question, especially now that reasoning models like R1/O1 have become increasingly popular. Taking a step back, our method is built on an information-theoretic framework that evaluates each step of the chain-of-thought reasoning process. We measure the information gain (IG) at every step to determine whether that step is adding useful information toward predicting the correct final answer. The underlying assumption is that a correctly executed reasoning step should yield a positive IG, meaning it contributes meaningfully to the overall prediction. If a step is executed incorrectly or is unnecessary, the IG will be low or close to zero.
For settings like R1/O1-like reasoning traces, which often involve exploratory or branched inference steps, our framework remains applicable. More specifically, if a model initially follows an incorrect path, our method will indicate that early steps have low or negative information gain. When the model later self-corrects, our method is able to label subsequent steps with a positive IG, effectively indicating that we are on track towards the correct outcome. This demonstrates that our framework is still able to reliably work even in settings like R1 if the steps are clearly delineated. This is also evident from our PRM data experiments, where the steps deemed irrelevant by human annotators have low IG, while the correct steps have a high IG.
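As a rough illustration, the per-step evaluation described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation; `answer_logprob` is a hypothetical supervisor scorer returning log p(Y | X_0..X_t):

```python
def information_gains(answer_logprob, steps, answer):
    """Per-step information gain, estimated as the increase in the
    supervisor's log-probability of the final answer after each step."""
    gains = []
    prev = answer_logprob([], answer)  # log p(Y | X_0), before any steps
    for t in range(1, len(steps) + 1):
        cur = answer_logprob(steps[:t], answer)  # log p(Y | X_0..X_t)
        gains.append(cur - prev)  # near-zero gain flags a useless step
        prev = cur
    return gains
```

Under this sketch, a step whose gain is close to zero would be flagged as incorrect or unnecessary, while a clearly positive gain marks a step that contributes toward the final answer.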
We will make sure to add this clarification in the final version of the paper.
> I noticed that the robustness of the model isn't fully addressed. Given that the model's probability distribution appears similar to a reward function, it seems potentially vulnerable to adversarial manipulation or instability. Could the authors elaborate on any measures taken to improve the model's resilience to such issues?
Thank you again for pointing out this important issue of robustness. We have actually conducted experiments specifically designed to assess the robustness of our framework. In particular, we investigated the impact of spurious correlations on the evaluation of intermediate reasoning steps. In our experiments, standard methods such as Math-Shepherd and ORM were affected by spurious signals. They incorrectly inferred the usefulness of intermediate steps when spurious correlations were present.
To be concrete, in Figure 3, we designed an experiment where we injected spurious correlations into step 3, linking its correctness to a spurious feature of the previous step. In this case, we clearly demonstrated that existing methods are unable to detect the precise step where the error occurs, whereas our method is able to pinpoint it. In addition, Table 2 (Arithmetic dataset) further illustrates the lack of robustness of existing methods. When only the final step of the chain-of-thought is problematic, Math-Shepherd flags all steps wrong and ORM provides an uninformative score for each step. In contrast, our approach (IG) directly computes the useful information content in each step by measuring the predictive power added toward the final answer. This design choice makes our method robust to such spurious correlations.
Next, although our supervisor model can be interpreted as a reward model, it differs significantly from conventional ones that rely solely on correctness or preference signals. As mentioned above, our framework aims to predict the final answer tokens and not a binary label, contrary to existing methods.
Lastly, we acknowledge that computing information gain can become challenging when the chain of thought is adversarially long; in those cases, it may be necessary to employ stronger supervisor models to accurately capture the information gain at each step. So far, we have only tested GPT-2 and Llama-3 as supervisors. Our experiments, including those with reasonably long chains of thought as seen in the PRM evaluations (Table 2), confirm that our approach remains robust even under these long-chain conditions. We plan to explore the analysis of very long chains of thought in future work.
> References
We thank the reviewer for these references and will include them in the final version of our paper.
We thank the reviewer again for their insightful comments to improve our paper. If the above addresses all outstanding questions, we hope that the reviewer would consider raising their score. We remain happy to answer any further questions.

Review 2:
Summary: This paper introduces an information-theoretic framework to evaluate the quality of Chain-of-Thought (CoT) reasoning in Large Language Models (LLMs) without relying on annotated intermediate steps. Their framework quantifies the "information-gain" (IG) at each reasoning step, measuring how much each sub-task contributes to predicting the correct final answer. By leveraging conditional mutual information, the method identifies unidentifiable tasks (i.e., steps the model cannot execute due to insufficient training) and detects failures in CoT reasoning. The authors validate their approach on toy arithmetic, GSM8K, and PRM800K datasets, demonstrating improved accuracy in error detection compared to baselines like Outcome Reward Modelling (ORM) and Math-Shepherd (MS). Overall, this work provides a rigorous, scalable method to analyze LLM reasoning processes.
## Update after rebuttal
Thank you to the authors for their efforts in the rebuttal. These responses have addressed some of my concerns, so I will raise my score from `2 Weak reject` to `3 Weak accept`. However, I am still unsure whether this paper should be accepted.
Claims And Evidence: Most claims are well-supported by diverse experiments (toy, arithmetic, GSM8K, PRM800K), but the **scalability and generality** claims require further validation:
1. **scalability**: This method necessitates the additional training of a *supervisor model*, which appears to lack generalizability across different types of problems. This requirement imposes a significant computational burden and presupposes the prior construction of a dataset.
2. **generality**: No experimental validation is provided for non-mathematical tasks and other commonly used real-world datasets. (The experimental setup for GSM8K also does not represent typical usage scenarios.) I believe a potential application of this method lies in helping users identify the aspects of reasoning in which a specific LLM is less proficient. However, the sub-problem types still require manual definition and lack empirical evidence to substantiate their effectiveness.
Overall, the paper provides strong evidence for the effectiveness of error detection. However, practical limitations (e.g., computational cost) warrant discussion.
Methods And Evaluation Criteria: Using conditional mutual information to quantify step-wise contributions to the final answer aligns with the intuition that correct reasoning steps should incrementally reduce uncertainty about the solution. This method avoids reliance on costly step-wise annotations, addressing a critical limitation of prior work. Comparing against ORM (outcome-based) and MS (completion-based) highlights the framework’s unique ability to detect errors in correlated or ambiguous scenarios (e.g., Table 1, Figure 4).
Theoretical Claims: The paper presents two key theoretical contributions with formal proofs: Theorem 3.3 (conditional independence after unidentifiable tasks) and Proposition 3.4 (information-gain estimation via cross-entropy loss).
In my view, the proof is generally correct and acceptable. However, the theory is built upon several strong assumptions, about which I have some reservations:
1. `More concretely, after a model's CoT reasoning diverges from the ground truth at step k, every subsequent step adds no additional information regarding the correct final output Y` (Theorem 3.3). In existing long-chain-of-thought methods, it seems common to first reason through some possible plans, which may include erroneous reasoning, and then subsequently make corrections or change the line of thinking. It is evident that not all reasoning following an erroneous step is useless to the final result.
2. Mapping each reasoning step to primitive tasks appears to require substantial manual summarization and seems difficult to generalize in practical applications. Real-world tasks are highly diverse, with subtasks that are varied and not always explicitly generalizable. (For example, if I want an LLM to check the correctness of the proofs in this paper, how should the subtasks of CoT be generalized?)
Experimental Designs Or Analyses: The experimental designs effectively validate the framework’s core claims in controlled and real-world settings. My concern lies in the fact that for Experiment 1, `5.1. Toy data experiments`, it appears that the training process of the supervisor model GPT-2 could have a certain impact on the results. However, the authors did not discuss the effects of different supervisor models, different training epochs, or the size of the training set on the outcomes. (Of course, the authors are not required to address all of the above details; I am merely curious about the influence of the supervisor model.)
Supplementary Material: I reviewed Appendix `A. Proofs` to examine the reasoning process and Appendix `C. Additional Experimental Details` to understand the specific operations referred to by the different $\lambda$ in the main text, among other details.
Relation To Broader Scientific Literature: This paper builds on prior work such as **Process Supervision** [1], which requires costly step-wise annotations, and **outcome-based methods** [2], which rely on final accuracy. The theoretical foundation aligns with formalizations of LLM reasoning but extends it by operationalizing information flow. Furthermore, methods such as ToT[3], GoT[4], and Reflexion[5] are also derivatives of CoT[6], and these methods require repeated sampling, evaluation, and selection for each reasoning step. However, previous papers primarily relied on LLMs for direct evaluation and selection, whereas this method provides an effective way to evaluate from the perspective of information theory.
References:
1. Lightman, H., et al. (2023). Let’s verify step by step.
2. Havrilla, A., et al. (ICML'24). GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements.
3. Yao, S., et al. (NeurIPS'23). Tree of Thoughts: Deliberate Problem Solving with Large Language Models.
4. Besta, M., et al. (AAAI'24). Graph of Thoughts: Solving Elaborate Problems with Large Language Models.
5. Shinn, N., et al. (NeurIPS'23). Reflexion: Language Agents with Verbal Reinforcement Learning.
6. Wei, J., et al. (NeurIPS'22). Chain of Thought Prompting Elicits Reasoning in Large Language Models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths:**
- The idea of using Information Theory to evaluate the quality of each step in Chain-of-Thought (CoT) is novel and does not require costly step-by-step annotations.
- The experiments in the paper demonstrate the effectiveness of this approach. The method outperforms Outcome Reward Modelling (ORM) and Math-Shepherd (MS) in detecting errors in CoT reasoning.
- A mathematical formulation for the framework is provided, offering a comprehensive justification for the theory.
- There is potential for extending this approach to other CoT variants that require evaluation and selection, such as Tree-of-Thought (ToT) and Graph-of-Thought (GoT).
**Weaknesses:**
- **Generalizability:** The method necessitates additional training of a supervisor model, which may not be generalizable across different types of problems. For more details, refer to the `Claims and Evidence` section.
- **Applicability:** The method requires manual definition of sub-problem types. However, many real-world tasks are highly diverse and may not be explicitly decomposable into sub-tasks. Moreover, this decomposition requires significant manual summarization, making it difficult to apply in practical scenarios. For further details, see the `Claims and Evidence` and `Theoretical Claims` sections.
- **Simplistic Experimental Setup:** The datasets and tasks used in the experiments are relatively simple, which may not sufficiently demonstrate the method's effectiveness in more complex tasks. Refer to the `Experimental Designs or Analyses` section for more information.
- **Strong Theoretical Assumptions:** The theoretical proofs are based on several strong assumptions that may not hold in all cases. For more details, see the `Theoretical Claims` section.
Other Comments Or Suggestions: Thank you to the authors for their efforts in the rebuttal. These responses have addressed some of my concerns, so I will raise my score from `2 Weak reject` to `3 Weak accept`.
Questions For Authors: 1. In the experiment described in `Section 5.2`, the authors mention that errors are related to the magnitude of the numbers. Observing `Figure 4`, it is evident that when $3x > 10^5$ or $2y > 10^5$, the computational accuracy of Llama shows a very significant decline, even forming a distinct boundary. I am curious about this phenomenon: does Llama-3-8B experience a noticeable performance breakdown when performing calculations with numbers greater than $10^5$? Or is this a deliberate setup by the authors for the purpose of the experiment?
2. I am curious about the computational cost of training the supervisor model. How do the training time and computational resources required for the supervisor model compare to the training of the LLM itself? Additionally, how sensitive is the method to the choice of the supervisor model and the training settings?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful questions and feedback. Below, we address the concerns raised:
> Scalability/Generalization: Additional training required of a supervisor model, training details and construction of new dataset
**Training and Data Setup**: Our method requires additional training of a supervisor model, primarily a small GPT-2 (117M parameters), fine-tuned on about 10,000 samples within 3–6 GPU hours on an A100 (learning rates: 5e-6 to 5e-7, batch size: 64). Although Llama performed slightly better on arithmetic tasks, GPT-2 is generally similarly effective and more cost-efficient on other datasets. Importantly, our approach does not require constructing a separate dataset; it only uses the final answers and the model's CoT outputs, e.g., just 3,000 samples in our PRM800K experiments.
**Generalization**: We acknowledge that this extra training may limit generalizability, a common challenge in reward modeling also faced by methods like ORM, and we appreciate the reviewer’s suggestion; we believe that future work, such as exploring in-context IG estimation techniques, will help address these limitations.
> Empirical validation on non-mathematical tasks
In line with recent works on LLM reasoning [Wang et al., 2024b; Havrilla et al., 2024], we have focused on mathematical datasets since these provide a rigorous testbed for LLM reasoning, and determining the correctness of intermediate steps is unambiguous in these datasets. However, we agree that extending this to other reasoning datasets would be valuable future work.
> Identifying LLM reasoning weaknesses and Llama-8b performance breakdown
Our experiments demonstrate that our framework effectively identifies specific reasoning weaknesses compared to existing methods. In fact, this is how we discovered that Llama is not proficient at adding large and small numbers together. Specifically, we observed a real and notable accuracy drop in Llama-3-8b for arithmetic tasks involving numbers greater than 10⁵, likely due to limited exposure in its training data. We believe investigating this inherent limitation further is an interesting avenue for future work and will emphasize this clearly in the final version of our paper.
> Long CoT with self-correction (R1/O1 style generations)
Thanks for raising this important point. Taking a step back, our method relies on an information-theoretic framework that measures information gain (IG) at each chain-of-thought step. A correctly executed step yields a positive IG, while an erroneous or unnecessary step shows low or near-zero IG. As the reviewer rightly mentioned, in R1/O1-like traces with exploratory or branched inference, our framework actually remains robust: early missteps have low or negative IG, and when the model self-corrects, subsequent steps exhibit positive IG, signaling a return to the correct path.
> It is evident that not all reasoning following an erroneous step is useless to the final result
We would like to clarify that our Assumption 3.1 specifically considers the case where the LLM's final answer is incorrect. More generally, in cases of self-correcting CoTs, our framework remains applicable as we explained above. For more details, please refer to our first response to Reviewer yLdQ.
> Mapping each reasoning step to primitive tasks appears to require substantial manual summarization
We only use categorization to compute aggregate information gain, when the goal is to obtain a high-level view of an LLM's intermediate reasoning. This categorization is applied only on the evaluation split, not during training. Our PRM experiments show that our framework identifies errors without explicit categorization. In addition, to get annotations, for instance in GSM8K, we can efficiently prompt an LLM to classify each substep into categories like ['Addition', 'Subtraction', …, 'Other']. We appreciate this comment and will clarify it in the final version.
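As a concrete illustration, such a prompt-based classification could look like the following sketch (the prompt wording and category list here are hypothetical examples for GSM8K-style steps, not taken from the paper):

```python
CATEGORIES = ["Addition", "Subtraction", "Multiplication", "Division", "Other"]

def categorization_prompt(step: str) -> str:
    """Build a prompt asking an LLM to map one CoT substep to exactly
    one category, so that each substep gets an unambiguous label."""
    return (
        "Classify the following reasoning step into exactly one of these "
        "categories: " + ", ".join(CATEGORIES) + ".\n"
        "Reply with the category name only.\n"
        "Step: " + step
    )
```

The single-label requirement matters here: aggregate information gain per category is only meaningful if every substep maps to exactly one category.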
> Theoretical proofs are based on several strong assumptions
We have used these assumptions to rigorously motivate the use of information-gain in our framework. In practice, however, our method remains applicable to real-world datasets (as shown by our PRM experiments) with potentially non-linear/branching CoTs. Additionally, we would like to emphasize that both of these assumptions are intuitively plausible.
Briefly, Assum. 3.1 states that each correct step should add information for predicting the final answer, while wrong/irrelevant steps should not. Likewise, Assum. 3.2 posits that the operations the model applies are restricted to those learned during training. Despite these assumptions, our experiments on uncontrolled datasets (such as PRM800K and the Llama-3-8B arithmetic tasks) show that our framework remains effective in realistic settings where parts of these assumptions may not strictly hold.
Lastly, we hope our clarifications above addressed the reviewer's concerns, and the reviewer would consider increasing their score.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the author's diligent rebuttal! However, I still have some concerns, which I summarize as the difficulties in transitioning from theoretical generalization to practical application.
1. In your rebuttal, you stated:
*"We would like to clarify that our assumption 3.1 specifically considers the case where the LLM's final answer is incorrect."*
This appears to introduce a new assumption into your theory. Would your theory still hold in cases where the final result is correct? Of course, you have provided extensive empirical evidence to support your argument, but this creates a gap between theory and experiment.
2. Perhaps I did not fully understand, but I still have doubts about the assumptions in Lines 146-149 (right) and Lines 213-216 (left) of the paper. In your theoretical framework, you assume that steps following the first incorrect step do not contribute to information gain. However, in practice, you suggest that under self-correction(i.e., correcting errors after an initially incorrect step and ultimately arriving at the correct result), your method can still capture information gain. While I agree with the effectiveness of your method, it appears contradictory to your theory.
Additionally, this assumption seems to contradict the principles behind the construction of the PRM800K dataset. I have studied the PRM800K dataset, which explicitly considers the correct reasoning following an incorrect step in TN-class problems as contributing to information gain. See [this reference](https://openai.com/index/improving-mathematical-reasoning-with-process-supervision/). Of course, different assumptions are acceptable, and I even agree more with yours, but this creates a conflict with your experiments on the PRM800K dataset. Specifically, do you classify correct reasoning after an incorrect step as -1 according to your assumption, or do you follow OpenAI’s annotation principle? I suggest removing the experiments on this dataset.
3. **Empirical validation on non-mathematical tasks**
Even if experiments are conducted solely on mathematical datasets, errors are not limited to mistakes in arithmetic operations such as addition and multiplication. Misunderstanding the problem, leading to incorrect equation formulation (which is more common), and failing to maintain contextual consistency can also be sources of errors. Additionally, your method requires manually defining sub-problem types. How do you plan to exhaustively enumerate these sub-types in practical applications?
These are just some of my key concerns. Please forgive my ignorance, and if I have misunderstood anything, kindly point it out. For now, I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer’s continued engagement and feedback, which greatly helps in clarifying and refining our manuscript.
> [...] This appears to introduce a new assumption into your theory [...]
We believe there is a misunderstanding regarding our assumption 3.1, and we acknowledge that our statement in the rebuttal that the reviewer quoted above was not specific enough.
To clarify, our Assumption 3.1 does not refer to general cases where a substep may be executed incorrectly by an LLM during the reasoning process. Instead, this assumption specifically considers the case where a step that is **necessary** to solve a problem is **unidentifiable** in the training data, i.e., no composition of learned tasks can yield that task. In such cases, we assume that once the model diverges at such a step, no subsequent steps add further information toward the correct final output.
On the other hand, in cases where an LLM self-corrects a wrong CoT step, this step would not be considered unidentifiable (as the model was able to find some composition of learned operations to execute this step correctly). Similarly, if the initial steps considered were irrelevant to the solution, these steps would be deemed unnecessary. In either case, the fact that we arrive at the correct reasoning after initial exploration is not at odds with our existing assumptions, and hence, we do not require any new assumptions to accommodate this.
To formalise this using our current framework, suppose the correct reasoning path to the final answer $Y$ is:
$$
X_0 \rightarrow X_1 \rightarrow \dots \rightarrow X_T \rightarrow Y
$$
If the model temporarily diverges at step $X_t$, taking a path through some incorrect or exploratory step $Z$, but then returns correctly to $X_t$, the self-corrected path can be represented as:
$$
X_0 \rightarrow \dots \rightarrow X_t \rightarrow Z \rightarrow X_t \rightarrow X_{t+1} \rightarrow \dots \rightarrow X_T \rightarrow Y
$$
In this case, standard conditional independence results ensure that:
$$
Y \perp Z \mid X_t \, .
$$
In other words, the information-gain at $Z$ should be 0. However, after this misstep, the information-gain may increase once the model is back on the right track. Hence, our theoretical framework accommodates such self-corrections without contradiction.
We will ensure that this nuance is explicitly laid out in the final version of our paper.
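This conditional-independence argument can also be checked numerically: on a toy joint distribution where the detour $Z$ depends only on $X_t$, the empirical conditional mutual information $I(Y; Z \mid X_t)$ vanishes. The following is an illustrative sketch, not code from the paper:

```python
from collections import defaultdict
from math import log

def conditional_mi(samples):
    """Empirical I(Y; Z | X) in nats from a list of (x, z, y) samples."""
    pxzy, pxz, pxy, px = (defaultdict(float) for _ in range(4))
    n = len(samples)
    for x, z, y in samples:
        pxzy[(x, z, y)] += 1 / n
        pxz[(x, z)] += 1 / n
        pxy[(x, y)] += 1 / n
        px[x] += 1 / n
    # I(Y;Z|X) = sum_{x,z,y} p(x,z,y) log[ p(x,z,y) p(x) / (p(x,z) p(x,y)) ]
    return sum(p * log(p * px[x] / (pxz[(x, z)] * pxy[(x, y)]))
               for (x, z, y), p in pxzy.items())

# Detour Z is a coin flip independent of Y given X_t: zero information gain.
detour = [(x, z, x % 2) for x in (0, 1) for z in (0, 1) for _ in range(3)]
# Contrast: Z copies a Y that is not determined by X_t: positive gain.
informative = [(0, y, y) for y in (0, 1) for _ in range(3)]
```

Here `conditional_mi(detour)` is numerically zero, matching the claim that an exploratory detour carries no information gain, while `conditional_mi(informative)` is positive.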
> I still have doubts about the assumptions in Lines 146-149 (right) and Lines 213-216 (left)
We acknowledge that the phrasing in lines 213-216 (left) in our paper is currently ambiguous. In the final version, we will revise this to:
"More concretely, if at step $k$, a model encounters a reasoning step which is **necessary** for obtaining the correct answer and **unidentifiable** in the training data, CoT reasoning diverges from the ground truth at this step and every subsequent step adds no additional information regarding the correct final output Y"
This revised version explicitly clarifies that the reasoning chain diverges in the case where a **necessary step is unidentifiable**, and hence rules out cases where an LLM makes an unnecessary step and/or self-corrects its mistake.
> Comments on PRM800K Dataset
As explained above, our theoretical framework is consistent with our PRM800K dataset experiments. In particular, after an incorrect reasoning step, our framework will only attribute zero information gain to all subsequent steps when the erroneous task is both necessary for solving the question and also strictly unidentifiable. Otherwise, as we show in our formalisation above, the information-gain at the incorrect step is expected to be 0 but may be positive for subsequent steps. As such, we follow OpenAI's annotation principle: correct reasoning steps are labelled as +1 while incorrect steps receive a label of -1 (regardless of the order).
> Enumerating sub-tasks in practical applications
Our method is designed as an auditing tool for evaluating a model's CoT steps. Importantly, users can define the sub-problem categories based on their specific domain and evaluation goals rather than requiring an exhaustive pre-definition. The key is that there must be a clear, unambiguous mapping between substeps and categories (each substep should belong to exactly one category) to avoid ambiguous inferences. Moreover, such a categorization is only needed if the goal is to evaluate models on specific kinds of reasoning steps (such as problem understanding, specific mathematical operation, etc). In fact, in such cases, categorization would also be needed for other kinds of reward modelling approaches (ORM/PRM/MS). Conversely, if the objective is simply to detect reasoning errors at a sample-wise level, then no categorization is required.
We appreciate the reviewer's comments and hope that the above has addressed all the concerns raised. We will integrate these clarifications into the final version to enhance the clarity of our paper.

Review 3:
Summary: This paper proposes a novel information-theoretic approach to evaluate Chain-of-Thought (CoT) reasoning in LLMs without annotated intermediate steps. The proposed framework can identify erroneous reasoning across diverse settings and consistently outperforms baselines.
Claims And Evidence: The statements are supported by empirical evaluations (Sect. 5) and theoretical framework (Sect. 3).
Methods And Evaluation Criteria: The proposed method is established under somewhat strong assumptions (Assumption 3.1 & 3.2), which may be violated in real-world scenarios.
Theoretical Claims: I only checked the correctness of the assumptions and main theorems in the submission.
Experimental Designs Or Analyses: The empirical evaluations are not sufficient to verify the adaptability of the proposed method. Specifically, real-world CoT reasoning tasks come in diverse structures, which may not follow the constrained settings of the synthetic and real datasets used in the paper.
Supplementary Material: Yes, I reviewed Sect. B.
Relation To Broader Scientific Literature: The proposed evaluation metric can be used to identify the incorrect reasoning steps and can thus lead to a high false-positive rate in certain scenarios. This research direction is valuable in some related areas, such as CoT data generation.
Essential References Not Discussed: The essential related works, in my understanding, have been involved into the discussion.
Other Strengths And Weaknesses: Strengths:
1. This paper is well-written and easy to understand.
2. The proposed evaluation method is novel, which differs from existing methods by quantifying the information gain at each reasoning step, rather than relying only on the final answer.
Weaknesses:
1. The preset assumptions (3.1 & 3.2) are strong (main concern).
2. The experimental design cannot support the claims well.
Other Comments Or Suggestions: Some typos, e.g., in the equation (line 211, right column), it would be better to use $\approx$ rather than $=$.
Questions For Authors: 1. The assumptions (3.1 & 3.2), in my opinion, are strong. In reality, a global optimal CoT exists with very low information gains at the early steps. In such case, the optimal CoT may be overlooked by the proposed method.
2. It would be convincing if the authors could give more empirical evaluations on real datasets.
3. It is not clear if the definition of "unidentifiability" could be used for interpreting few-shot generalization.
Code Of Conduct: Affirmed.
Overall Recommendation: 2

---

Rebuttal 1:
Rebuttal: First of all, we would like to thank the reviewer for their time and constructive comments to improve our paper. Here below we clarify all the questions raised by the reviewer.
> The empirical evaluations and adaptability to diverse CoT structures
Our framework evaluates each reasoning step using the information gain (IG) metric, measuring its contribution toward the final answer. Although we present our method with linear chains-of-thought for clarity, it applies equally well to non-linear structures. For example, in O1/R1-like reasoning where multiple paths are explored, early erroneous steps yield low or negative IG, while correct steps show positive IG. Our experiments on the MATH/PRM dataset confirm that steps deemed uninformative by human annotators consistently exhibit low IG, demonstrating that our method robustly adapts to varied chain-of-thought structures.
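For intuition, the per-step information gain described here can be sketched as the change in the log-probability of the final answer after conditioning on one more reasoning step. The following is a minimal toy sketch; `toy_logprob` and all names are hypothetical stand-ins, not the paper's model or estimator:

```python
import math

def step_information_gain(logprob_answer, steps):
    """IG of each CoT step: the change in the log-probability of the
    final answer after conditioning on one more reasoning step."""
    gains, prev = [], logprob_answer([])
    for i in range(1, len(steps) + 1):
        cur = logprob_answer(steps[:i])
        gains.append(cur - prev)
        prev = cur
    return gains

def toy_logprob(prefix):
    """Stand-in for log p(answer | question, steps so far): correct steps
    (True) raise the answer probability, erroneous steps (False) lower it."""
    p = 0.1
    for correct in prefix:
        p = min(0.99, p * 3.0) if correct else p * 0.5
    return math.log(p)

gains = step_information_gain(toy_logprob, [True, False, True])
# the correct first and third steps get positive IG; the erroneous
# middle step gets negative IG
```

This mirrors the behaviour described above: an early misstep yields low or negative IG, and the transition back to correct reasoning is visible as a return to positive IG.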
> The proposed method is established under somewhat strong assumptions,
We appreciate the reviewer’s insightful comment. Firstly, we would like to clarify that we have used Assumptions 3.1 and 3.2 to rigorously motivate our use of information-gain for failure mode detection in LLM CoTs. In practice, however, our methodology remains applicable to real-world datasets (as shown by our PRM data experiments), where it outperforms the ORM baseline. Secondly, we emphasise that these assumptions are formalizations of phenomena which make intuitive sense in practice. Specifically:
Assumption 3.1:
This assumption formalizes the idea that each correct reasoning step should contribute additional information for predicting the final answer. In our framework, a positive information-gain (IG) indicates that a step is informative, whereas a low or negative IG suggests that the step is either erroneous or superfluous. Importantly, this criterion is scalable and remains applicable to long chains-of-thought (CoTs). Whether the chain is short or long, if each step is clearly delineated, our method evaluates each step based on the IG metric. As demonstrated in our PRM experiments, even in cases where early steps may show low IG due to initial missteps, the framework still identifies the transition to correct reasoning later in the chain.
Assumption 3.2:
While this assumption might seem strong, it is intuitively grounded in the operational characteristics of large language models. It essentially posits that the operations the model applies during reasoning are restricted to a set of primitives (or their compositions) learned during training. This is a reasonable expectation, as models are generally more effective when they perform tasks that are similar to those encountered during training.
Despite the theoretical rigor of these assumptions, our experiments on uncontrolled datasets (such as PRM800K and the Llama-3-8B arithmetic tasks) demonstrate that our framework remains effective in realistic settings where parts of these assumptions may not strictly hold.
In summary, while these assumptions are used to rigorously motivate our framework, our experiments indicate that the methodology is robust and practically applicable beyond the constrained formal setting.
> Definition "Unidentifiability" for interpreting few-show generalization.
We thank the reviewer for this interesting suggestion. Although our focus isn’t on few-shot generalization, our method can be extended to few-shot settings. In our formulation, unidentifiability measures whether a task lies outside the span of learned primitives. In a few-shot setting, if adding examples increases the information gain for a reasoning step, it suggests that the examples help the model perform that step correctly; if not, the task remains unidentifiable. We see this as a promising direction for future research which is outside the scope of this paper.
> It would be convincing if the authors could give more empirical evaluations on real datasets.
We would like to clarify that, in addition to our synthetic data and controlled GSM8K experiments, our submission also includes evaluations on real-world data:
- PRM800K Dataset: This dataset covers real mathematical problems from high school to post-graduate levels. Our experiments on PRM800K show that our methodology predicts the correctness of intermediate steps more accurately than the ORM baseline, making our IG a cost-effective proxy for human-annotated labels.
- Llama-3-8B Arithmetic Experiment: Although generated by us for a specific arithmetic task, this uncontrolled experiment demonstrates that our method correctly identifies errors, specifically, misapplications in the final addition step. In comparison, the baselines either provide uninformative results (ORM) or exhibit a high false-positive rate (Math-Shepherd).
Together, these experiments underscore the practical utility of our approach in realistic, uncontrolled settings.
We hope that the above have addressed all the reviewer's questions and that the reviewer would consider raising their score.
---

Title: Feature Importance Metrics in the Presence of Missing Data
Decision: Accept (poster)

Summary: This paper tackles the challenge of determining feature importance in realistic scenarios where data is missing. It is the first to explicitly formulate this problem and, in doing so, introduces FMIG, a novel gradient-based metric that quantifies how small increases in the frequency of feature measurement can improve prediction performance. The approach is backed by theoretical derivations and validated through synthetic experiments under various missing data scenarios.
## update after rebuttal
After reading the other reviewers' comments and the authors' rebuttal, I have decided to keep my original score. I appreciate the authors’ thoughtful response to my review. While I still believe it may be possible to obtain or semi-generate a more realistic setting to evaluate the proposed method, I think the paper meets the current experimental standards for more theoretical works at this conference and should be accepted.
Claims And Evidence: The paper clearly lays out its theoretical derivations and presents synthetic experiments that support its main claims. It also clearly defines the theoretical settings in which the method applies (MAR vs. NMAR). However, the claim that FMIG can effectively guide data acquisition in practice is less supported, since all experiments are synthetic. The assumption that the gradient approach can accurately capture the relevant effects on real-world data needs to be validated.
Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable for an initial validation of the proposed approach. The paper’s synthetic experiments help illustrate how both the full data and observed data metrics, as well as the FMIG, behave under controlled conditions. However, further testing on benchmark or real-world datasets would be necessary to confirm the practical applicability of these methods.
One useful suggestion would be to induce synthetic missingness in real-world data by synthetically perturbing the missingness mechanism. Additionally, it would be beneficial to run an experiment on a real dataset where features are missing and where there is prior knowledge of the expected effects. This would allow the authors to assess whether the insights provided by FMIG and the other metrics align with known expectations, even without direct access to the full data or comparison to the full data scenario.
Theoretical Claims: I have briefly checked the main claims and they seem valid. In particular, the proofs related to the identification of FMIG and the derivation of the LOCO metrics appear solid. The theoretical ground is strong, and the settings and assumptions are clearly stated and highlighted as expected.
Experimental Designs Or Analyses: As I mentioned in the Methods and Evaluation section, the synthetic experiments are well demonstrating the differences between full and observed data scenarios, as well as the insights that could be captured by FMIG. However, further testing on real-world data would help validate these findings in practice.
Supplementary Material: I only reviewed the main ideas behind the proofs provided and made sure all details for reproducing the experiments are available
Relation To Broader Scientific Literature: In my view, this paper makes a very important contribution, as the entire field of feature importance has generally focused only on the case where all data is always available which is very often not true in practice. The examples provided, especially in medical contexts, are highly relevant and highlight the need for further research in this area. Moreover, the authors offer an initial solution backed by a strong theoretical foundation, which I believe could pave the way for both theoretical and empirical research with significant impact.
Essential References Not Discussed: I'm not familiar enough with the latest works in that field to provide a solid review or opinion on this part.
Other Strengths And Weaknesses: In addition to the strengths mentioned above, I would like to emphasize one more strength: the use of the MAR formulation is well justified as it provides a robust model for real-world scenarios where sampling decisions are often based on the outcomes of other features. Ideally having a method for NMAR could be very useful, but as stated much harder to tackle.
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's thoughtful review and positive feedback. We recognize the value of conducting additional experiments, particularly those utilizing datasets with established prior knowledge of expected effects, to further validate our results. Unfortunately, we do not have access to such datasets. As a result, we will explicitly address this limitation in the Conclusion section, highlighting the potential for future research to explore these opportunities.

---

Summary: ## update after rebuttal: 2 --> 3
When applying feature importance (FI) methods as explanation techniques for machine learning (ML) models, the presence of missing data is typically not considered. The authors highlight the issue that missing values can impact FI method results and demonstrate this using the leave-one-covariate-out (LOCO) approach. They adapt LOCO to account for missing values by incorporating the missingness process into the computation. Furthermore, the authors introduce the new "feature measurement importance gradient" (FMIG) as a novel metric to assess how the increased measurement or observation of a feature would influence predictive performance. They support their claims with several synthetic examples.
Claims And Evidence: While I agree that the issue the authors raise here is an interesting one, especially for applied data science work, the paper is often not well written (especially but not limited to formal notation, see further below) and the experimental results are not extremely insightful.
Methods And Evaluation Criteria: LOCO is examined from three different perspectives: (1) full data LOCO, (2) observed data LOCO, and (3) leave-one-covariate-unmeasured (LOCU), with a primary focus on the first two.
The distinction between these perspectives is not entirely clear, leaving some open questions. For instance, both LOCO^FD and LOCO^OD can be approximated by imputing missing values and differ only in the imputation method used. This raises the question of how to distinguish between simple and non-simple imputation methods.
Furthermore, concrete application examples illustrating when each perspective is of particular interest would be valuable, especially in differentiating between LOCO^OD and LOCU.
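For concreteness, the basic LOCO quantity underlying all three perspectives is the increase in expected loss when one covariate is withheld from the model. A minimal toy sketch (toy data and toy models, not the authors' estimators):

```python
import random
import statistics

def loco(loss, full_model, reduced_model, data):
    """LOCO importance of a feature: expected loss of the model without
    the feature minus expected loss of the full model."""
    with_feature = statistics.mean(loss(y, full_model(x)) for x, y in data)
    without_feature = statistics.mean(loss(y, reduced_model(x)) for x, y in data)
    return without_feature - with_feature

rng = random.Random(0)
data = []
for _ in range(500):
    x0, x1 = rng.gauss(0, 1), rng.gauss(0, 1)  # x1 is pure noise
    data.append(((x0, x1), x0 + rng.gauss(0, 0.1)))

sq = lambda y, yhat: (y - yhat) ** 2
full = lambda x: x[0]      # the (here: known) optimal predictor
drop_x0 = lambda x: 0.0    # without x0, predict the mean of y
drop_x1 = lambda x: x[0]   # dropping the noise feature changes nothing

# x0 gets a large positive LOCO score; x1 gets a score of zero
```

The three perspectives then differ only in the distribution over which the expectations are taken (full data, observed data, or data under an unmeasured-feature regime).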
Theoretical Claims: I have reviewed the proof of Lemma 4.1 in the appendix. Unfortunately, I am only partially familiar with Robins' g-formula, which prevents me from fully understanding this (fundamental) step.
The definition of the FMIG is motivated by literature in the appendix (A.2). This should be part of the main text. A more detailed derivation of the formulation and construction of the newly introduced measure (FMIG) should be provided, at least in the appendix.
Experimental Designs Or Analyses: General comment w.r.t. the experimental setup: I am not exactly sure why the authors mix time-dependent components into their experiment without discussing them beforehand in the paper (with only a short reference to the appendix where this is covered in more detail). This is certainly not "illegal" but somewhat unmotivated, and one wonders why the i.i.d. scenario is not explored at all. Also, the experiments are not very "large" (I do not mean data size here!), so not many different scenarios are explored, and it is unclear how much the results generalize.
Experiments 1 and 2: The experiment setup seems ok to me, but the experiments pretty much say what we knew already? Missingness reduces the predictive value of features. And "informed missingness", well…. "informs", so it influences our loss-based LOCO result.
Experiment 3: I would rather call this more or less a "sanity check" for the FMIG measure. Which is fine. That LOCO cannot provide the same information seems clear, as it was never constructed for this. LOCO assesses a feature's importance, FMIG reflects the importance or added value of increasing a feature’s measurement probability.
Experiment 4: The presentation is somewhat confusing, as it initially follows the scenario from Experiment 1 but is later adapted in the results section. Additionally, the ground truth is unclear (as it differs between Figures 2A and 2B).
Detailed information about the experimental setup is given in the appendix, but primarily for experiments 1+2; experiments 3+4 lack detailed information.
Supplementary Material: I read through appendices A-D and F in detail and skimmed appendix E. Code should / must be given for reproducibility.
Relation To Broader Scientific Literature: The authors state that this is the first work to evaluate LOCO in the context of missing values. I am also not aware of any other research addressing this topic. The paper cites works from both the feature importance and missing values literature, evaluating the advantages and disadvantages of various approaches in the missing values research field.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: The paper has severe problems in notation and description. This makes it sometimes very hard to follow and places too heavy a burden on the reader. One should not have to "guess" what things mean. Some examples:
Cross-entropy is not defined on "labels" x "labels", but on "labels" x "probs". For the same reason, the classifier f_cl also cannot output labels but must output probs.
It is often confusing that the authors conflate the "training procedure" of the model with the "predictor" of the model. I know that it can be annoying to distinguish between the two in notation, but it would help.
Just look at formula (2): The authors write f(X_{(1), -j}). First of all, by definition, f would always take R. Here, it does not. Then, via "-j" the input loses one dimension. The definition of f does not allow this. I can GUESS what is LIKELY meant here —> But I should not. And this makes other areas of the paper even harder to follow.
Why does the classifier also take R? Additionally, the features are defined as "R union ?".
This is redundant? Also, is training or prediction meant here...?
X_(0) is never defined but is used later when we do X_(pi).
There is a difference in the missingness-process between the observed missingness label \in {0,1} and its associated probability. I had the impression that the authors conflate the two.
Other Comments Or Suggestions: Appendix F, line 938: "eps_i" is used twice here, in the initialization line (2nd) and the recursion (1st). I would GUESS these should be independent noise variables? So different? Defined like this, we convex-combine the same stuff and the recursion would never change the value of X^t ??
According to Table 1, Appendix F.4, the probability of observing X3 in experiment 1 is 0.25; in the experiment description in section 5.2 for experiment 1 it is said to be 0.3. Which number is correct?
Further suggestions for improvements:
A. The role of imputation in missing data should be mentioned in the introduction.
B. The approach is only formulated for classification. Why? I see no reason for that restriction and the authors do not even discuss it.
C. I would highly appreciate a simple example clarifying FMIG.
Questions For Authors: 1. Section 3.4.1 (Unbiased Estimation of the Full Data LOCO Metric) suggests that unbiased estimation is impossible. A conclusive statement reinforcing this would help. What does unbiased refer to?
2. As written above, the distinction between Observed Data LOCO and LOCU is not entirely clear. Providing examples for each scenario would help clarify how the expectation values differ. How do the LOCO measures relate to each other if you have a dataset that contains no missing values? Is LOCO^{OD} = LOCU?
3. If a feature has nearly never been observed, can FMIG still assign it a measurement importance? If yes, how? Is it assumed that the learner has encountered such data points during training? Can we infer a feature importance value from this, or would it simply be a measurement importance? I would expect the latter.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their detailed feedback, which has helped us to improve our paper.
## Methods and Evaluation Criteria:
For clarification on estimation, imputation, and the definition of bias, please refer to our response to reviewer rtga. To address the question about a distinction between the metrics, we have provided a simplified diagnostic scenario involving heart attacks and troponin levels in the introduction. Please find a slightly shorter version below:
### Scenario 1: Biological Relationship Analysis (Full Data LOCO)
A biomedical researcher aims to understand the intrinsic biological relationship between troponin levels, measured via a blood test, and heart attacks, independent of clinical testing protocols. The full data $LOCO$ metric is ideal to assess the explanatory power of troponin values, as it isolates the pure biological relationship, unaffected by real-world data missingness.
### Scenario 2: Clinical Prediction Model Development (Observed Data LOCO)
A machine learning engineer develops a heart attack prediction model and evaluates the relevance of troponin levels under current real-world missingness patterns. The observed data $LOCO$ metric is used to assess feature importance, ensuring the model aligns with the hospital’s existing testing protocols.
### Scenario 3: Testing Protocol Optimization (FMIG and LOCU)
After deploying the prediction model, an analyst seeks to optimize troponin testing by evaluating how different testing frequencies affect diagnostic performance. They use FMIG to quantify the benefits of increasing test frequency and LOCU to assess the impact of discontinuing troponin testing entirely.
## Theoretical Claims:
We have integrated the relevant appendix sections (A.2) into the main body and expanded the derivations of the FMIG formulation.
## Experimental Design and Analysis:
### 1. Experiment Design:
We chose a time-series setting due to its prevalence in medical prediction tasks. Presenting it directly in the main paper would have been overly complex.
### 2. Experiments 1 and 2:
The reviewer mentions that the experiment results confirm what is expected. We agree and want to emphasize that our goal is not to provide novel insights on missingness but to underscore the crucial difference between observed and full data feature importance metrics, challenging current practices that misapply estimators.
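As a toy illustration of this difference (hypothetical data and models, not the paper's experiments): when a feature is measured only rarely, its observed data LOCO score can be far smaller than its full data LOCO score, even though the underlying relationship between feature and label is unchanged.

```python
import random
import statistics

rng = random.Random(1)
sq = lambda y, p: (y - p) ** 2

# y = x + noise; x is measured only 30% of the time (MCAR for simplicity)
data = []
for _ in range(4000):
    x = rng.gauss(0, 1)
    y = x + rng.gauss(0, 0.1)
    r = rng.random() < 0.3            # missingness indicator R
    data.append((x, r, y))

model = lambda x, r: x if r else 0.0  # uses x only when it was measured
no_x = lambda x, r: 0.0               # classifier without the feature

# full data LOCO: evaluate as if x were always measured (oracle access)
fd = statistics.mean(sq(y, no_x(x, True)) - sq(y, model(x, True)) for x, r, y in data)
# observed data LOCO: evaluate under the actual missingness pattern
od = statistics.mean(sq(y, no_x(x, r)) - sq(y, model(x, r)) for x, r, y in data)
# od is roughly the measurement rate times fd: missingness alone
# shrinks the feature's apparent importance
```

This is exactly why an estimator for one metric cannot be read as an estimate of the other.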
### 3. Experiment 4:
Experiment 4 mirrors Experiment 1 but introduces stronger missingness. We've clarified this in Table 4 of the appendix and revised the description in the main text.
The differences in ground truth between Figures 2A and 2B stem from variations in missingness patterns influencing classifier training. Our ground truth reflects feature importance for a classifier trained on the available data which differs between experiments.
## Strengths and Weaknesses:
We thank the reviewer for pointing out errors in our notation. The following adjustments have been made:
We updated the classifier and loss function definitions. The classifier is now $ f_{cl} \colon (\mathbb{R} \cup \lbrace "?" \rbrace )^d \times \lbrace 0,1 \rbrace ^d \to [0,1]^K $, and the loss function is $ L \colon \{0, 1, \ldots, K-1\} \times [0,1]^K \to \mathbb{R} $.
To clarify when a feature is excluded, we define $ X_{-j}, R_{-j}, X_{(1),-j} $ as the reduced dimension observed features, missingness indicators, and ground truth features, and the classifier excluding feature $ j $ is $ f_{cl, -j} \colon (\mathbb{R} \cup \lbrace "?" \rbrace )^{d-1} \times \lbrace 0,1 \rbrace ^{d-1} \to [0,1]^K $.
While we acknowledge the redundancy of $ R $ in the classifier, we kept it to highlight the classifier's dependence on missingness. We now explicitly include $ R$ in all classifier definitions, including the full data case.
Regarding $ X_{(0)} $, we added a reference for readers unfamiliar with potential outcomes: (Rubin, 2005).
Finally, in response to the comment about $ \pi $'s mapping, we corrected the definition of $ \pi$ to represent probabilities. It now maps into $ [0,1]^{2^d}$, rather than $ \lbrace 0,1 \rbrace ^d$.
## Questions for Authors:
### 1. Distinction Between Metrics:
If no missingness exists, $LOCO^{OD} = LOCO^{FD}$ since $R = \vec{1}$ and $X = X_{(1)}$. However, $LOCU$ remains generally unidentifiable without knowledge of how leaving feature $j$ unmeasured affects other measurements.
### 2. FMIG when Features Are Rarely Observed:
If a feature is never observed, it is impossible to estimate its measurement impact, violating positivity assumptions. FMIG is based on an odds ratio, so if initial measurement probability is zero, the intervention effect is also zero. This does not mean increased measurement is unhelpful—just that our definition precludes estimation in such cases. When a feature is rarely observed, identification holds, but finite-sample issues can cause non-robust estimates (see Figure 2).
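The odds-ratio intervention referred to here makes this boundary behaviour explicit. Under the standard incremental shift (our sketch of the parametrization, consistent with the positivity condition $\delta_j \pi_j + 1 - \pi_j > 0$ discussed in this thread), multiplying the odds of measurement by $\delta_j$ maps $\pi_j$ to $\delta_j \pi_j / (\delta_j \pi_j + 1 - \pi_j)$:

```python
def shifted_probability(pi, delta):
    """Multiply the odds pi/(1-pi) of measuring feature j by delta.
    The denominator delta*pi + 1 - pi is positive for pi in [0, 1], delta > 0."""
    return delta * pi / (delta * pi + 1 - pi)

assert shifted_probability(0.0, 2.0) == 0.0   # never measured stays never measured
assert shifted_probability(1.0, 2.0) == 1.0   # always measured stays always measured
assert 0.3 < shifted_probability(0.3, 2.0) < 1.0  # delta > 1 raises interior probabilities
```

So a measurement probability of exactly zero is a fixed point of the intervention, which is why the effect of increasing measurement is undefined (zero by construction) in that case.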
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thoughtful rebuttal and appreciate their planned changes. Nonetheless, I still have open issues:
1. Regarding your answer to reviewer sya3: In the proof of the positivity assumption, it must also hold that $\delta_j \cdot \pi_j + 1 - \pi_j > 0$ (this holds, but it should be written down).
Apart from this, I would appreciate it if you could give examples for learners taking R as an input.
2. The examples for $LOCO^{FD}$ and $LOCO^{OD}$ are useful. Still, my above-mentioned point (“The distinction between these perspectives is not entirely clear, leaving some open questions. For instance, both $LOCO^{FD}$ and $LOCO^{OD}$ can be approximated by imputing missing values and differ only in the imputation method used.”) is not fully addressed. The distinction is clear from the application side, but the mathematical definition is still inconclusive insofar as both LOCO estimations allow for imputation.
3. The authors did not comment on my claim that LOCO provides different insights into the data than their newly introduced measure, FMIG. I think this is an important point to mention since users of FMIG could accidentally draw wrong implications otherwise.
4. Experiment 4: The authors write that the difference in ground truth LOCO importance stems from the missingness in the data. In my definition of a ground truth LOCO importance, it would be based on the *true* underlying data-generating function. It should not differ since the features still stem from the same distribution, whether observed or not. If the “ground truth” depends on the underlying data, it must clearly be stated as an approximation that - in this experiment - would distort the comparison.
5. Additionally, for reproducibility, the code must be published.
If the authors respond to these aspects and adopt their manuscript accordingly (esp. points 3-5), I would be willing to improve my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging in the discussion and for your constructive feedback.
Please find our responses to your comments below.
## Comment 1:
We agree and have added the additional statement that $\delta_j \cdot \pi_j + 1 - \pi_j > 0$ holds, for completeness.
Regarding examples for classifiers taking $R$ as an input, we will add two references regarding the benefit of doing so: [1,2]
[1] Van Ness, Mike, et al. "The missing indicator method: From low to high dimensions." Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.
[2] Singh, Janmajay, Masahiro Sato, and Tomoko Ohkuma. "On missingness features in machine learning models for critical care: observational study." JMIR Medical Informatics 9.12 (2021): e25022.
## Comment 2:
We believe that our paper, along with our response to reviewer rtga (which will be incorporated into the manuscript), clearly outlines which imputation methods yield unbiased estimators for the $LOCO^{FD}$ and $LOCO^{OD}$ metrics. However, we agree that a clearer interpretation of these results would enhance the paper, and we provide this below, which we will also include in the revised manuscript.
When using imputation for the estimation of $LOCO^{FD}$ and $LOCO^{OD}$, the missing values must be imputed in a manner consistent with the classifier’s respective "working conditions."
For $LOCO^{FD}$, the classifier operates under full data availability. Therefore, missing values must be imputed using samples from the full data distribution. This is achieved by using observed values and imputing missing values according to a density that models the ground truth values of $X_{(1)}$. When done correctly, $LOCO^{FD}$ is evaluated on a dataset that mirrors the original dataset as if no missingness had occurred. In this case, the classifier can leverage the imputed values to potentially improve performance.
For $LOCO^{OD}$, evaluation occurs under conditions where missingness is present. Since the observed data already reflects the classifier's working conditions, no imputation is required. However, one may opt for impute-then-regress classifiers, where imputation serves purely as a means to handle missing values elegantly. In such cases, the imputation step does not introduce additional information beyond what is already present in the observed input features.
## Comment 3:
We assume the reviewer is referring to their comment on Experiment 3.
We apologize for not addressing this comment directly in our previous response. At the time, we felt it reiterated a point already made in the paper. However, we now recognize that this distinction should be emphasized more clearly for the reader.
To address this, we have added the following sentence to the discussion section (line 427, right column):
"FMIG is thus not an alternative to the observed data or full data LOCO metrics, but an additional tool to assess the importance of a feature under a change in measurement probability."
## Comment 4:
We define the feature importance metric as the feature importance given a classifier. This metric accurately represents the true feature importance only if the classifier correctly models the actual probability of the label given the set of features and missingness indicators.
However, obtaining such a classifier is non-trivial—particularly in the full-data setting—as we discuss in Remark 3.2 (Appendix C) and demonstrate in our experiment in Appendix G.
In our experiment, the term ground truth refers to the feature importance computed using a (potentially imperfect) classifier. Since any classifier must be learned from the observed data, and we aim to evaluate the metric for the same classifier using different estimation methods (as shown in Figure 2), we necessarily train the classifier on the observed data. Because the observed data varies between experiments, the corresponding classifier—and consequently, the ground truth feature importance estimates—also differ across experiments.
As expected, the losses are (slightly) higher in Figure 2B because the corresponding observed data contains more missingness, making it more challenging to train a classifier.
In conclusion, the observed differences do not arise because the ground truth is an approximation, but rather because it is defined based on a classifier that differs between experiments.
We acknowledge that this distinction is complex and may not be immediately apparent to the reader. To mitigate potential confusion, we have added an explanation in the paper.
## Comment 5:
We apologize for not addressing this in our initial response. The experiments were conducted as part of a broader project on active feature acquisition, which is currently under preparation and scheduled as a software paper in September 2025 as the last publication of my PhD thesis. We hope the reviewer understands that, due to this, we are unable to share the code publicly at this time.

---

Summary: This paper introduces a conceptual framework that distinguishes between feature importance methods under missing data: (1) full-data feature importance evaluates each feature's importance if all feature values were present; (2) observed-data feature importance evaluates each feature's importance based on the actual observed data, which include missing features; and (3) feature measurement importance evaluates each feature's importance based on model improvement when the feature is measured with a higher probability.
Empirically, this paper instantiates LOCO (leave-one-covariate-out) for full-data and observed-data feature importance, as well as introduces the FMIG (Feature Measurement Importance Gradient) for feature measurement importance. With synthetic datasets, the authors demonstrate that these three methods offer complementary insights. Finally, the authors demonstrate that violations of the positivity assumption can introduce bias for full-data feature importance.
Claims And Evidence: - In 3.4.2, it is claimed that mean imputation for observed-data feature importance is unbiased. It is unclear what bias means for observed-data feature importance. Hence, it is also unclear why predictive information can lead to bias.
- In 3.4.3, it is strongly claimed that conditional mean imputation introduces bias for both full-data and observed-data feature importance. For observed-data feature importance, I have the same concern as in the first bullet point. For full-data feature importance, given how strong the claim is, although the intuitive explanation makes sense, more solid theoretical or empirical evidence should be presented to support the strong claim.
- For 3.4.3, the authors should discuss whether these biases are also present for methods that consider feature interactions (e.g., the Shapley values). Otherwise, the authors should be clear that their exposition focuses on LOCO.
- Overall, the claims about biases should be strengthened by clearly defining what biases mean for full-data feature importance, observed-data feature importance, and even feature measurement importance. Also, it should be made clear whether those claims are specific to LOCO or applicable to feature importance methods generally.
Methods And Evaluation Criteria: The problem focuses on demonstrating different qualitative insights gained from full-data LOCO, observed-data LOCO, and FMIG. Hence, the qualitative evaluation using synthetic datasets makes sense.
Theoretical Claims: I checked all the proofs.
- In lines 790-800 of the proof for Lemma 4.1, it's only shown that $\pi_{j, \delta_j} = 0$ implies $\pi_{j} = 0$, and that $\pi_{j, \delta_j} = 1$ implies $\pi_{j} = 1$. Does $\pi_{j, \delta_j} \in (0, 1)$ imply $\pi_j > 0$? This is also necessary to prove that positivity holds.
- If the above issue is addressed, then the proof for Corollary E.1 seems correct to me.
Experimental Designs Or Analyses: I checked all the experimental designs and analyses, and I found no particular issues.
Supplementary Material: I reviewed all sections of the Appendix except Section A.
Relation To Broader Scientific Literature: Although prior methods exist for computing feature importance scores under missing data, this paper provides a clear conceptual framework that distinguishes between the purposes of those methods (i.e., full-data feature importance, observed-data feature importance, vs. feature measurement importance). This conceptual framework can bring clarity when new methods are developed to address feature importance under missing data.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths**
- The formalism that distinguishes full-data vs. observed-data feature importance is clear and empirically useful (as demonstrated by the synthetic experiments).
**Weaknesses**
- The experiments are done for only one set of data-generating and missingness-generating processes. Other generation processes should also be included to make the empirical claims stronger.
- The authors noted that FMIG provides descriptive insights for feature measurement importance. It is unclear how such descriptive insights are useful in practice. Isn't the main point of feature measurement importance to prescribe which features to measure more often?
Other Comments Or Suggestions: - Experiment 4 should be presented before Experiment 3. This will complete the analyses for full-data and observed-data feature importance before introducing additional insights from FMIG.
- The full-data classifier doesn't take the missingness indicators as inputs. This should be noted in the subsection labeled **Classifier** in 3.1 for clarity.
- In Equation (5), it is unclear how $\pi_1(R_1 | R_{-0}, X_{-0})$ is defined. I think a special case is needed when $i=1$.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's positive feedback and valuable suggestions for improvement. Please find our responses below.
## Claims and Evidence
### Definition of Bias
We define $\theta$ as an estimand and $\hat{\theta}$ as its estimator. The bias of the estimator is given by:
\begin{equation}
\text{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta.
\end{equation}
### Generalization of Metrics
Our results extend to a broader class of full data feature importance metrics:
\begin{equation}
\theta^{FD} = \mathbb{E} \left[ g( \lbrace f_{cl,s}(X_{(1),s}, R_{s} = \vec{1}) : s \in \mathcal{S} \rbrace, Y ) \right],
\end{equation}
where $g$ can be any function and $\mathcal{S}$ is the set of all feature subsets. This definition includes a wide range of metrics, including LOCO and Shapley values. Notably, Shapley values can be expressed as a weighted average of LOCO values across submodels (Verdinelli and Wasserman, 2023).
Additionally, we consider observed data feature importance metrics $\theta^{OD}$ of the same form, with $X_{(1),s}$ and $R_{s} = \vec{1}$ replaced by $X_s$ and $R_s$.
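For concreteness, the LOCO instance of this metric family can be sketched as follows (a minimal illustration using ordinary least squares and squared-error loss on synthetic data; the function names and data here are illustrative, not the paper's implementation):

```python
import numpy as np

def ols_predict(X_train, y_train, X_test):
    """Least-squares fit and predict (with intercept)."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return np.column_stack([np.ones(len(X_test)), X_test]) @ coef

def loco(X, y, j):
    """LOCO importance of feature j: increase in MSE when j is left out."""
    full_err = np.mean((y - ols_predict(X, y, X)) ** 2)
    X_minus = np.delete(X, j, axis=1)
    drop_err = np.mean((y - ols_predict(X_minus, y, X_minus)) ** 2)
    return drop_err - full_err

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.5, size=500)
scores = [loco(X, y, j) for j in range(3)]
# feature 0 (strong signal) scores far above feature 2 (pure noise)
```

Here $g$ is the squared-error risk difference between the full model and the leave-one-out submodel, matching the special case where $\mathcal{S}$ contains only the full set and the sets with one feature removed.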
### Full Data Feature Importance Metrics
As requested, we demonstrate that conditional mean imputation results in biased estimation of full data feature importance metrics. Our analysis is based on the formulation used for the unbiased MI estimator:
\begin{equation}
\theta^{FD} = \sum_{X_{(1)},Y} g( \lbrace f_{cl,s}(X_{(1),s}, R_{s} = \vec{1}) : s \in \mathcal{S} \rbrace , Y ) p(X_{(1)},Y)
= \sum_{X_m, X_o, Y, R} g( \lbrace f_{cl,s}(X_{m \cap s}, X_{o \cap s}, R_{s} = \vec{1}) : s \in \mathcal{S} \rbrace, Y ) p(X_m|X_o,Y, R) p(X_o,Y, R),
\end{equation}
where $p(X_m|X_o,Y, R)$ represents the imputation density. When the imputation model is learned, one can apply Monte Carlo integration to obtain the unbiased estimator.
Conditional mean imputation, however, simplifies the above expression to:
\begin{equation}
\theta^{FD} \approx \sum_{ X_o, Y, R} g( \lbrace f_{cl,s}( \mathbb{E}[X_{m \cap s}|X_o,R], X_{o \cap s}, R_{s} = \vec{1}) : s \in \mathcal{S} \rbrace, Y ) p(X_o,Y, R),
\end{equation}
which assumes that the expectation operator can be pulled inside the functions $g$ and $f_{cl}$. This assumption is only valid if both functions are linear, which is generally not the case. Consequently, the use of conditional mean imputation introduces bias.
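The argument that the expectation cannot be pulled inside a nonlinear function can be illustrated numerically (a toy sketch with $f(x) = x^2$ and a Gaussian imputation distribution; this is not the paper's data-generating process):

```python
import numpy as np

rng = np.random.default_rng(1)

# Imputation distribution: X_m | X_o ~ N(mu, 1); downstream nonlinear f.
mu = 0.5
draws = rng.normal(loc=mu, scale=1.0, size=100_000)

def f(x):
    return x ** 2  # any nonlinear function will do

monte_carlo = f(draws).mean()  # unbiased: E[f(X_m) | X_o] = mu^2 + 1
plug_in = f(mu)                # conditional mean imputation: f(E[X_m | X_o]) = mu^2

# The gap (here ~1, the conditional variance) is exactly the bias
# introduced by moving the expectation inside the nonlinear f.
bias = monte_carlo - plug_in
```

Monte Carlo integration over the imputation density recovers the correct expectation, while the plug-in conditional mean does not, illustrating the bias claimed above.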
### Observed Data Feature Importance Metrics
Next, we demonstrate that mean imputation (or a broader class of imputation methods) does not introduce bias for observed data feature importance metrics $\theta^{OD}$. Since $\theta^{OD}$ is defined as a function of the classifier $f_{cl}$, the reported feature importance metric reflects classifier-specific importance rather than a general global measure.
A commonly used class of classifiers, referred to as impute-then-regress classifiers, first impute the missing values and subsequently classify. If the classifier is sufficiently flexible and the dataset is large enough, the choice of imputation method becomes inconsequential, as no new information is introduced. Thus, any classifier that maps $X_s$ and $R_s$ to $Y$ can be used, including those employing mean imputation, without inducing bias.
However, this conclusion holds only if imputation is performed within the classifier itself using only its input features. If conditional mean imputation is applied to the entire dataset before choosing the subset for the classifier, bias arises. Let the imputed features be $X'_i = X_i$ if $R_i = 1$ and $X'_i = \mathbb{E}[X_i \mid X_o, R = \vec{1}]$ if $R_i = 0$.
The resulting estimator contains terms $f_{cl,s}(X^\prime_s, R_s)$ and is thus no longer a function of $X_{s}$ and $R_{s}$ alone, but of the whole $X_o$. Consequently, applying conditional mean imputation before classification results in biased estimates.
## Theoretical Claims:
We added the following to complete the proof for the positivity assumption:
"In settings with $\pi_j \in (0,1)$, we find
$$\pi_{j,\delta_j} = \frac{\delta_j \cdot \pi_j}{\delta_j \cdot \pi_j + 1 - \pi_j} > 0$$
since $\delta_j \cdot \pi_j > 0$."
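A quick numerical sanity check of this added step (illustrative only; `shifted_prob` is our name for the displayed expression):

```python
def shifted_prob(pi, delta):
    """pi_{j,delta_j} = delta*pi / (delta*pi + 1 - pi)."""
    return delta * pi / (delta * pi + 1 - pi)

# Interior values of pi stay strictly inside (0, 1) for any delta > 0,
# so positivity is preserved.
for pi in (0.01, 0.3, 0.7, 0.99):
    for delta in (0.1, 1.0, 5.0):
        assert 0.0 < shifted_prob(pi, delta) < 1.0

# Boundary cases already shown in the proof: 0 and 1 are fixed points.
assert shifted_prob(0.0, 2.0) == 0.0
assert shifted_prob(1.0, 2.0) == 1.0
```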
## Other Strengths and Weaknesses:
The reviewer asked for clarification on FMIG’s practical use and the value of ‘descriptive’ insights. We have updated the text to avoid the ambiguous term ‘descriptive’. FMIG will inform us about what features should be measured more frequently, but doesn’t suggest specific policy interventions.
## Other Comments or Suggestions:
We've switched experiments 3 and 4 according to the reviewer's request. We also included $R$ in the full data classifier for clarity. We've also clarified that we mean $R_{-0} \equiv \lbrace \rbrace$.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed the concerns I raised. I encourage the authors to include an exposition about bias w.r.t. to the choice of imputation in the Appendix (or the main text if space permits). I have raised my score from 3 to 4 accordingly. | null | null | null | null | null | null | null | null |
EmoGrowth: Incremental Multi-label Emotion Decoding with Augmented Emotional Relation Graph | Accept (poster) | Summary: The author proposes an Augmented Emotional Semantic Learning (AESL) framework, which incorporates an Emotion Relation Graph (ERG) to enhance emotion classification. To address the issue of missing partial labels in past data, a reliable soft label generation method is introduced. Additionally, a Relation-based Knowledge Distillation (RKD) approach is proposed to mitigate the impact of missing labels in future data. Furthermore, this work pioneers the application of Multi-Label Class-Incremental Learning (MLCIL) to real-world emotion classification tasks. The effectiveness of the proposed framework is demonstrated through various incremental learning protocols on three different datasets.
## update after rebuttal
I would like to keep my rating.
Claims And Evidence: The related work and ablation experiments provide theoretical support for the various components of this complex model and demonstrate their effectiveness in practice. However, the overall presentation lacks clarity and rigor.
Methods And Evaluation Criteria: Yes. The proposed method is relatively novel, and the experiments are fairly comprehensive.
Theoretical Claims: The content description and organization of the paper are somewhat disordered. For example, the definition of ERG in Section 3.4 should be introduced before Section 3.3.
Experimental Designs Or Analyses: The compared works have not publicly reported experiments on these three datasets. Therefore, it is unclear whether the experimental results under the customized protocol adhere to the optimal settings of the respective baseline methods.
Supplementary Material: Yes, I have reviewed the supplementary experimental results section.
Relation To Broader Scientific Literature: Research in the field of Multi-Label Class Incremental Emotion Decoding is relatively scarce, and this paper appears to be among the early studies to focus on this area.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The paper introduces a novel and dynamic Emotion Relation Graph (ERG) that utilizes cross-attention to capture correlations between new and old classes. A label propagation algorithm is used to iteratively refine soft labels, while a Graph Autoencoder (GAE) learns semantic embeddings of emotion labels. Furthermore, high-dimensional alignment is performed within the Valence-Arousal feature space. These combined methods effectively mitigate the problem of catastrophic forgetting.
2. The proposed framework is thoroughly evaluated on three datasets, with extensive ablation experiments and visualization provided.
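The label-propagation step mentioned in the first strength can be sketched generically (a standard graph label-propagation iteration with an illustrative correlation matrix `W` and mixing weight `alpha`; this is not the authors' exact ERG formulation):

```python
import numpy as np

def propagate_soft_labels(W, Y0, alpha=0.5, iters=50):
    """Iteratively diffuse soft labels over a label-relation graph.

    W  : (C, C) row-normalized label-correlation matrix
    Y0 : (N, C) initial (partially observed) soft labels
    """
    Y = Y0.copy()
    for _ in range(iters):
        Y = alpha * Y @ W + (1 - alpha) * Y0  # mix propagated and initial labels
    return Y

# Two correlated emotion labels: observing one raises belief in the other.
W = np.array([[0.6, 0.4],
              [0.4, 0.6]])
Y0 = np.array([[1.0, 0.0]])       # only the first label observed
soft = propagate_soft_labels(W, Y0)
```

The iteration converges to $(1-\alpha) Y_0 (I - \alpha W)^{-1}$, so the unobserved correlated label receives nonzero soft mass, which is the mechanism by which missing past labels can be reconstructed rather than treated as negatives.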
Weaknesses:
1. The framework presented is relatively complex, with significant interdependence between its modules. However, the explanations of the associated figures are somewhat vague, and there is insufficient discussion of their relevance. Is the ERG specifically constructed for this task accurate?
2. The overall content presentation and sequence of the paper lack clarity. For example, the definition of ERG should be introduced prior to Section 3.3.
3. The compared works have not publicly conducted experiments on the three datasets used in this study. Therefore, it is unclear whether the experimental results align with their optimal configurations.
Other Comments Or Suggestions: No
Questions For Authors: 1. I would like to understand the practical significance of the Multi-Label Class Incremental Emotion Decoding field. Many existing emotion-related methods are capable of integrating different tasks into a single input for large models or other frameworks.
2. Please refer to the weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1.I would like to understand the practical significance of the Multi-Label Class Incremental Emotion Decoding field. Many existing emotion-related methods are capable of integrating different tasks into a single input for large models or other frameworks.**
(1) In real world Brain-Computer Interfaces, practical applications often require dynamic adaptation to new emotion categories over time. For example:
• In clinical BCIs for depression monitoring, initial models may focus on basic emotions (e.g., happy/sad), but later need to incorporate nuanced states (e.g., anhedonia, emotional blunting) as therapy progresses.
• Consumer-grade BCIs (e.g., gaming or education) may start with coarse emotion labels but require incremental addition of context-specific labels (e.g., "frustration during learning" or "flow state").
(2) Although large language models (LLMs) demonstrate strong performance in multi-task integration, many studies have shown that their continual learning capabilities remain limited and often suffer from catastrophic forgetting[1][2]. Therefore, investigating incremental learning remains crucial, particularly for specialized models in affective computing.
**2.The explanations of the associated figures are somewhat vague, and there is insufficient discussion of their relevance. Is the ERG specifically constructed for this task accurate?**
We sincerely apologize for any confusion caused. To clarify:
• Figure 2(a) presents the overall model architecture
• Figure 2(b) provides detailed illustration of the AEG-D component marked in Figure 2(a)
• Figure 3 specifically shows the knowledge distillation scheme introduced to mitigate catastrophic forgetting caused by future label absence, which contributes additional loss functions for better training of the Figure 2(a) framework.
We will enhance the explanation of these relationships in the final version.
For ERG, as evidenced in Figure 5, the learned emotion embeddings maintain meaningful topological structures in the semantic space, which illustrates the accuracy of the ERG.
**3.The overall content presentation and sequence of the paper lack clarity. For example, the definition of ERG should be introduced prior to Section 3.3.**
We appreciate the reviewer's suggestion and will adjust the manuscript to present ERG prior to Section 3.3 for better logical flow in the final version.
**4.The compared works have not publicly conducted experiments on the three datasets used in this study. Therefore, it is unclear whether the experimental results align with their optimal configurations.**
All compared methods maintain their original model architectures as reported in their respective papers. For feature extraction, a critical component in emotion decoding, we adopted the most widely-used approaches for these three datasets, as established in prior literature[3][4]. For parameter optimization, we reserved a separate validation set and carefully selected optimal configurations for fair comparison.
**References:**
[1] Chen W, Zhou Y, Du N, et al. Lifelong language pretraining with distribution-specialized experts[C]//International Conference on Machine Learning. PMLR, 2023: 5383-5395.
[2] Hu H, Sener O, Sha F, et al. Drinking from a firehose: Continual learning with web-scale natural language[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(5): 5684-5696.
[3] Fu K, Du C, Wang S, et al. Multi-view multi-label fine-grained emotion decoding from human brain activity[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[4] Horikawa T, Cowen A S, Keltner D, et al. The neural representation of visually evoked emotion is high-dimensional, categorical, and distributed across transmodal brain regions[J]. Iscience, 2020, 23(5).
[5] Cowen A S, Fang X, Sauter D, et al. What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures[J]. Proceedings of the National Academy of Sciences, 2020, 117(4): 1924-1934. | Summary: This paper introduces multi-label fine-grained class incremental emotion decoding, which aims to develop models capable of incrementally learning new emotion categories while maintaining the ability to recognize multiple concurrent emotions. It proposes an Augmented Emotional Semantics Learning (AESL) framework to address two critical challenges: past- and future-missing partial label problems. AESL incorporates an augmented Emotional Relation Graph (ERG) for reliable soft label generation and affective dimension-based knowledge distillation for future-aware feature learning. Experiments demonstrate the effectiveness of the proposed method. The main contributions of this paper are:
- It introduces the multi-label class incremental emotion decoding.
- It develops an innovative augmented emotional semantics learning framework.
## update after rebuttal
The authors' rebuttal addresses most of my concerns. I would like to keep my rating and weakly support the acceptance of the paper.
Claims And Evidence: The claims are supported by evidence. The proposed method's superiority is evident in its consistent outperformance of existing methods.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the studied problem. The proposed AESL is designed to handle the specific challenges of multi-label class incremental emotion decoding, and the evaluation metrics are appropriate for assessing the performance of the proposed method.
Theoretical Claims: Theoretical statements are not involved in this paper.
Experimental Designs Or Analyses: The experimental designs and analyses are overall reasonable. However, the parameter sensitivity analyses are missing. It is not clear how the regularization term $\lambda_1$ affect the final performance.
Supplementary Material: I have reviewed the appendix.
Relation To Broader Scientific Literature: The paper's contributions are relevant to the broader scientific literature in affective computing and machine learning. It builds on existing research in class incremental learning and multi-label classification, advancing the field by addressing specific challenges in emotion decoding.
Essential References Not Discussed: There are no essential references that are missing from the paper.
Other Strengths And Weaknesses: ### Strengths
- The problem studied in this paper is interesting.
- This paper is well written and in good shape, which makes it easy to follow.
- The experimental results are somehow promising.
### Weaknesses
- This paper introduces a novel research problem, multi-label class incremental emotion decoding. However, the unique challenges of this problem compared to similar problems are not explicitly pointed out, and the significance of this problem also needs to be further elaborated.
- From Table 4, one can observe that the performance of "w/o ESL&LD+AD" is very close to AESL and better than other cases. The authors should analyze the reason for this phenomenon.
- It is not clear how the hyperparameter $\lambda_1$ affects the final performance.
Other Comments Or Suggestions: Providing more details on computational requirements and efficiency can further strengthen the paper. Moreover, discussing the limitations of the proposed method and potential future research directions can better facilitate the reader's understanding.
Questions For Authors: 1. What are the unique challenges of multi-label class incremental emotion decoding compared to similar problems?
2. From Table 4, one can observe that the performance of "w/o ESL&LD+AD" is very close to AESL and better than other cases. The authors should analyze the reason for this phenomenon.
3. How do the hyperparameter $\lambda_1$ affect the final performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1.What are the unique challenges of multi-label class incremental emotion decoding compared to similar problems?**
(1) Unlike traditional single-label class incremental learning, multi-label class incremental faces unique challenges in addressing catastrophic forgetting from past- and future-missing partial label problems. Specifically, while a single sample may correspond to multiple emotion categories, only a specific subset is available during each task phase. For past labels, if the model fails to reconstruct them, all previous emotion categories would be incorrectly treated as negative samples, significantly degrading model performance. For future labels, the model requires the incorporation of domain knowledge to develop certain reasoning capabilities about unknown categories.
(2) Multi-label class-incremental emotion decoding deals with highly fine-grained emotional categories. In typical scenarios, human emotional variations are remarkably subtle, causing samples from different categories to be distributed closer together in the feature space. Consequently, decoding such fine-grained emotion categories presents significant challenges.
**2.From Table 4, one can observe that the performance of "w/o ESL&LD+AD" is very close to AESL and better than other cases. The authors should analyze the reason for this phenomenon.**
“w/o ESL&LD+AD” means directly integrating the sample-wise affective dimension features into category-wise emotion embeddings. In this setup, we aim to explore the rationale for incorporating domain knowledge on affective dimensions to mitigate the future-missing partial label problem. In fact, the affective dimension space can represent an arbitrary number of emotion categories, so directly using affective dimension features for class-incremental emotion decoding yields excellent results. However, in practical applications, this approach requires pre-establishing a mapping between emotion categories and affective dimensions, rather than constituting an end-to-end model.
**3.How do the hyperparameter λ1 affect the final performance?**
In fact, in the loss function shown in Equation (15), the relative magnitudes of λ₁, λ₂, and λ₃ are quite important. In the Brain27 dataset with B0-I9 setting, we fixed λ₂ = 0.5 and λ₃ = 2, and observed the impact of varying λ₁ on model performance as follows:
| $\lambda_1$ | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 |
|-----------------|---------|---------|--------|-------|--------|----------|---------|
| **mAP (%)** | 41.4 | 41.9 |42.7 | 42.9 | 43.1 | 42.6 | 42.0 |
We can observe that an excessively small λ₁ leads the model to distill more information from the affective dimension space. Considering the heterogeneity of the feature space, this makes the model harder to train and results in lower accuracy. On the other hand, an overly large λ₁ makes the model more susceptible to the impact of future-missing partial label problem, thereby degrading its performance. | Summary: The paper introduces multi-label, fine-grained class incremental emotion decoding AESL to adapt to the scenarios where novel emotion categories continuously emerge. To solve the critical past-missing partial label problem, AESL introduces an augmented Emotional Relation Graph (ERG) module, using graph-based label disambiguation to generate reliable soft labels. AESL enhances ERG by integrating historical ERG with new data, preserving emotional label correlations. Moreover, a relation-based knowledge distillation framework is proposed to align model features with the affective dimension space. The emotional semantics learning module utilizes ERG to design a graph autoencoder and learns emotion embeddings to facilitate semantic-specific feature decoupling. Comprehensive evaluations across three datasets (Brain27, Video27, and Audio28) prove the effectiveness of the proposed AESL.
## update after rebuttal
The rebuttal addresses several concerns, including category order on the original tasks, teacher models, and CSC comparison. However, it falls short of providing discussion/experiments on varying levels of task granularity (2-class sentiment vs. 7-class basic emotion vs. 28-class emotion), evidence-backed motivation for applying class-incremental learning to emotion analysis, and a literature review & SOTA comparison with recent works. This lack limits the robustness of their claims. The explanation of motivation also lacks depth, especially regarding its significance and characteristics.
Given these unresolved issues, I recommend a decision leaning towards a weak reject, as the core contributions are promising but not fully substantiated.
Claims And Evidence: The motivation of the task is not fully explained. The authors illustrate the necessity of multiple emotions using quotes and references. However, the claim that “novel emotion categories continuously emerge” has not been substantiated. Unlike the subject classification task, which has a large number of categories, emotion categories seem to be relatively few and fixed, such as the 27 and 28 categories in the paper's experimental datasets, which makes the class-incremental task appear unnecessary.
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: Comparative experiments are insufficient. On the one hand, among the compared methods, the most recent SLCIL method was published in 2019. More importantly, some of the SOTA MLCIL methods from 2024 have not been compared, such as CSC[1].
[1] Du K, Zhou Y, Lyu F, et al. Confidence self-calibration for multi-label class-incremental learning[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024: 234-252.
Supplementary Material: Yes, the experiment settings and implementation details
Relation To Broader Scientific Literature: no
Essential References Not Discussed: no
Other Strengths And Weaknesses: Pro
- The paper is generally well-written and well-organized.
- The proposed method can be applied to datasets of different modalities.
Con
- The motivation of the task is not fully explained. The authors illustrate the necessity of multiple emotions using quotes and references. However, the claim that “novel emotion categories continuously emerge” has not been substantiated. Unlike the subject classification task, which has a large number of categories, emotion categories seem to be relatively few and fixed, such as the 27 and 28 categories in the paper's experimental datasets, making the class-incremental task appear unnecessary.
- Discussion on category order. Unlike general classification tasks, the relationship between different emotion categories is very different. Therefore, different category orders in the incremental process will have an impact on performance. For example, if the old task is positive, the ability to learn negative and positive in the new task is different. Please discuss. In addition, a more reasonable performance report should be the average and standard deviation of the results under different orders.
- Stability on relation-based knowledge distillation. This module distills two teacher models at the same time. What is the relationship between the two teacher models and whether they are complementary? Related discussions are recommended. In addition, distillation with old knowledge can easily reduce the plasticity of the model and lacks the ablation of λ1.
- Comparative experiments are insufficient. On the one hand, among the compared methods, the most recent SLCIL method was published in 2019. More importantly, some of the SOTA MLCIL methods from 2024 have not been compared, such as CSC[1].
Other Comments Or Suggestions: no
Questions For Authors: see weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **1.The motivation of the task.**
The practical significance of multi-label incremental emotion decoding is twofold.
(1) In real world Brain-Computer Interfaces, practical applications often require dynamic adaptation to new emotion categories over time. For example, in clinical BCIs for depression monitoring, initial models may focus on basic emotions (happy/sad), but later need to incorporate nuanced states (anhedonia, emotional blunting) as therapy progresses.
(2) The rapid advancement of psychology has led to increasingly fine-grained discoveries in emotion categories. A recent study [1] empirically identified up to 80 distinct emotional states, far exceeding traditional coarse taxonomies. Besides, emotion categories are not static but evolve with interdisciplinary findings. For example, "bittersweetness" and "awe" were later additions to emotion frameworks.
**2.Discussion on category order.**
If the model initially learns exclusively positive emotions followed by negative emotions in subsequent tasks, the excessive inter-task categorical divergence may exacerbate catastrophic forgetting.
We conducted 10 randomized shuffles of the category sequence and computed the mean and standard deviation. The experimental results are presented below:
|Method | Brain27 B0-I9 | Brain27 B0-I3 | Brain27 B15-I3| Brain27 B15-I2 |
|-----|-------------------|------------------|---------------------|--------------------|
|AESL|44.8 $\pm$ 0.3|43.9 $\pm$ 0.5|41.7 $\pm$ 0.5|39.9 $\pm$ 0.5|
|Method | Video27 B0-I9 | Video27 B0-I3 | Video27 B15-I3| Video27 B15-I2 |
|-----|-------------------|------------------|---------------------|--------------------|
|AESL|45.1 $\pm$ 0.5|47.3 $\pm$ 0.4|41.6 $\pm$ 0.3|39.5 $\pm$ 0.7|
|Method | Audio28 B0-I7 | Audio28 B0-I4 | Audio28 B16-I3| Audio28 B16-I2 |
|-----|-------------------|------------------|---------------------|--------------------|
|AESL|49.4 $\pm$ 0.6|48.5 $\pm$ 0.3|47.9 $\pm$ 0.3|45.2 $\pm$ 0.2|
**3.The relationship between the two teacher models.**
The two teacher models exhibit complementary characteristics. The old-model teacher serves to mitigate the forgetting of previously learned emotion categories, while the affective-dimension teacher addresses catastrophic forgetting caused by future-missing partial label problem. Besides, relying solely on distillation from the affective dimension space would lead to suboptimal model performance due to the heterogeneity of feature representations, resulting in training difficulties.
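A two-teacher objective of this complementary kind might look as follows (a generic weighted KL-divergence sketch with hypothetical weights and names; this is not the paper's Eq. (15)):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two probability vectors."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def two_teacher_loss(student, old_teacher, affect_teacher, lam1=0.5, lam2=0.5):
    """Weighted sum of distillation terms from two complementary teachers:
    the old model (stability on past classes) and the affective-dimension
    model (guidance for future, not-yet-seen classes)."""
    return lam1 * kl(old_teacher, student) + lam2 * kl(affect_teacher, student)

student = np.array([0.5, 0.3, 0.2])
old_t   = np.array([0.6, 0.3, 0.1])
aff_t   = np.array([0.4, 0.4, 0.2])
loss = two_teacher_loss(student, old_t, aff_t)
```

Adjusting `lam1` relative to `lam2` trades stability on old classes against alignment with the affective dimension space, which is the balance discussed in the rebuttal.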
**4.Distillation with old knowledge can easily reduce the plasticity of the model and lacks the ablation of λ1.**
Knowledge distillation from the old model is a widely used method to mitigate catastrophic forgetting, where the balance between plasticity and stability is primarily controlled by the regularization parameter—specifically, λ₁ in Eq(15). In the Brain27 dataset with B0-I9 setting, we fixed λ₂ = 0.5 and λ₃ = 2, and observed the impact of varying λ₁ on model performance as follows:
| $\lambda_1$ | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 |
|-----------------|---------|---------|--------|-------|--------|----------|---------|
| **mAP (%)** | 41.4 | 41.9 |42.7 | 42.9 | 43.1 | 42.6 | 42.0 |
We can observe that an excessively small λ₁ leads the model to distill more information from the affective dimension space. Considering the heterogeneity of the feature space, this makes the model harder to train and results in lower accuracy. On the other hand, an overly large λ₁ makes the model more susceptible to the impact of future-missing partial label problem, thereby degrading its performance.
**5.Some of the SOTA MLCIL methods in 2024 have not been compared, such as CSC.**
We have conducted comparative experiments with the CSC method, and the results will be incorporated into the final version of the paper. The experimental results (Avg. Acc) are as follows:
|Method | Brain27 B0-I9 | Brain27 B0-I3 | Brain27 B15-I3| Brain27 B15-I2 |
|-----|-------------------|------------------|---------------------|--------------------|
|CSC|42.1 $\pm$ 0.2|43.0 $\pm$ 0.3|41.1 $\pm$ 0.6|39.4 $\pm$ 0.4|
|AESL|44.8 $\pm$ 0.3|43.9 $\pm$ 0.5|41.7 $\pm$ 0.5|39.9 $\pm$ 0.5|
|Method | Video27 B0-I9 | Video27 B0-I3 | Video27 B15-I3| Video27 B15-I2|
|-----|-------------------|------------------|---------------------|--------------------|
|CSC|44.1 $\pm$ 0.7|46.8 $\pm$ 0.2|41.6 $\pm$ 0.2|38.4 $\pm$ 0.6|
|AESL|45.1 $\pm$ 0.5|47.3 $\pm$ 0.4|41.6 $\pm$ 0.3|39.5 $\pm$ 0.7|
|Method | Audio28 B0-I7 | Audio28 B0-I4 | Audio28 B16-I3| Audio28 B16-I2|
|-----|-------------------|------------------|---------------------|--------------------|
|CSC|47.8 $\pm$ 0.3|48.0 $\pm$ 0.7|46.9 $\pm$ 0.5|45.2 $\pm$ 0.3|
|AESL|49.4 $\pm$ 0.6|48.5 $\pm$ 0.3|47.9 $\pm$ 0.3|45.2 $\pm$ 0.2|
**References:**
[1] Koide-Majima N et al. Distinct dimensions of emotion in the human brain and their representation on the cortical surface. NeuroImage, 2020 | Summary: The paper proposes **EmoGrowth**, a framework addressing **multi-label fine-grained class incremental emotion decoding**. This paradigm enables models to learn **new emotion categories incrementally** while preserving the ability to recognize **multiple concurrent emotions** in dynamic real-world environments. The **Augmented Emotional Semantics Learning (AESL)** framework introduces three innovations:
- **Emotional Relation Graph (ERG)** for label disambiguation and capturing inter-class dependencies.
- **Affective dimension-based knowledge distillation** to align features with continuous emotion representations.
- **Semantic-specific feature decoupling** guided by emotion embeddings.
Experiments on **brain activity (Brain27)**, **video (Video27)**, and **audio (Audio28)** datasets demonstrate superior performance in decoding **28 fine-grained emotions**, outperforming existing methods while mitigating catastrophic forgetting.
---
### **Key Contributions**
1. **Technical Innovation**:
- **Augmented ERG**: Dynamically integrates historical and new task emotional relationships using graph-based label disambiguation. This resolves the *past-missing label problem* by generating reliable soft labels and preserving label correlations across tasks.
- **Relation-based Knowledge Distillation (RKD)**: Aligns model features with **affective dimension spaces** (e.g., valence-arousal), leveraging domain knowledge to address the *future-missing label problem*.
- **Emotional Semantics Learning**: A graph autoencoder learns emotion embeddings for semantic-guided feature decoupling, enhancing multi-label recognition.
2. **Empirical Validation**:
- Evaluated under multiple incremental protocols (e.g., B0-I7, B16-I3), AESL achieves **state-of-the-art results**, surpassing SLCIL methods (e.g., LwF, ER) and MLCIL baselines (e.g., AGCN, OCDM) by large margins (e.g., **15–20% higher mAP** on Audio28).
- Visualization (t-SNE, ERG adjacency matrices) validates emotion embeddings’ semantic topology and label relationship reconstruction.
---
Claims And Evidence: ### **Weaknesses**
1. **Limited Exploration of Task Dynamics**:
- The impact of **emotion category ordering** (e.g., learning "Adoration" before "Awe") on performance remains unexamined, which could affect real-world deployment.
- **Task granularity**: The effect of varying emotion categories per task (e.g., 2 vs. 7 categories per incremental task) is not analyzed.
2. **Practical Constraints**:
- While affective dimensions (valence/arousal) are utilized, their annotation process for new tasks is not discussed—raising questions about scalability.
- No evaluation on **cross-subject** or **cross-domain adaptation** (e.g., transferring from brain signals to audio), limiting applicability to heterogeneous data scenarios.
3. **Theoretical Gaps**:
- The interplay between emotion categories and affective dimensions is underexplored. For instance, how explicit alignment (via RKD) improves incremental learning beyond empirical results lacks theoretical justification.
---
### **Suggestions**
1. **Expand Task Dynamics Analysis**:
- Conduct ablation studies on emotion order and task size variability.
- Explore **curriculum learning** strategies to optimize task sequences.
2. **Enhance Generalizability**:
- Test cross-domain incremental learning (e.g., Brain27 → Audio28).
- Investigate unsupervised/semi-supervised affective dimension estimation for new tasks.
3. **Deepen Theoretical Foundation**:
- Formalize guarantees for knowledge transfer between affective spaces and emotion categories.
Methods And Evaluation Criteria: SEE Claims And Evidence
Theoretical Claims: SEE Claims And Evidence
Experimental Designs Or Analyses: SEE Claims And Evidence
Supplementary Material: ALL
Relation To Broader Scientific Literature: SEE Claims And Evidence
Essential References Not Discussed: NO
Other Strengths And Weaknesses: SEE Claims And Evidence
Other Comments Or Suggestions: SEE Claims And Evidence
Questions For Authors: SEE Claims And Evidence
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **1. Conduct ablation studies on emotion order and task size variability.**
For the emotion order, we conducted 10 randomized shuffles of the category sequence. The experimental results are presented below:
|Method | Brain27 B0-I9 | Brain27 B0-I3 | Brain27 B15-I3| Brain27 B15-I2 |
|-----|-------------------|------------------|---------------------|--------------------|
|AESL|44.8 $\pm$ 0.3|43.9 $\pm$ 0.5|41.7 $\pm$ 0.5|39.9 $\pm$ 0.5 |
|Method | Video27 B0-I9 | Video27 B0-I3 | Video27 B15-I3| Video27 B15-I2 |
|-----|-------------------|------------------|---------------------|--------------------|
|AESL|45.1 $\pm$ 0.5|47.3 $\pm$ 0.4|41.6 $\pm$ 0.3|39.5 $\pm$ 0.7|
|Method | Audio28 B0-I7 | Audio28 B0-I4 | Audio28 B16-I3| Audio28 B16-I2 |
|-----|-------------------|------------------|---------------------|--------------------|
|AESL|49.4 $\pm$ 0.6|48.5 $\pm$ 0.3|47.9 $\pm$ 0.3|45.2 $\pm$ 0.2 |
For the task size variability, we have conducted experiments for different emotion categories per task. The experimental results are presented below:
|Method | Brain27 B0-I9 | Brain27 B0-I7 | Brain27 B0-I5| Brain27 B0-I3 |Brain27 B0-I1|
|-----|-------------------|------------------|---------------------|--------------------|-----|
|AESL|44.8 $\pm$ 0.3|44.6 $\pm$ 0.4|43.9 $\pm$ 0.3|43.9 $\pm$ 0.5 |43.0$\pm$ 0.9|
|Method | Video27 B0-I9 | Video27 B0-I7 | Video27 B0-I5| Video27 B0-I3 |Video27 B0-I1|
|-----|-------------------|------------------|---------------------|--------------------|-----|
|AESL|45.1 $\pm$ 0.5|46.8 $\pm$ 0.2|46.4 $\pm$ 0.2|47.3 $\pm$ 0.4 |44.8 $\pm$ 0.5|
|Method | Audio28 B0-I7 | Audio28 B0-I5 | Audio28 B0-I4| Audio28 B0-I2 |Audio28 B0-I1|
|-----|-------------------|------------------|---------------------|--------------------|------|
|AESL|49.4 $\pm$ 0.6|49.0 $\pm$ 0.2|48.5 $\pm$ 0.3|47.4 $\pm$ 0.7 |46.1$\pm$ 0.9|
In most cases, as the task size decreases and the number of learned tasks increases, the model performance exhibits an overall declining trend.
**2. Explore curriculum learning strategies to optimize task sequences.**
We appreciate the valuable suggestion regarding curriculum learning for task sequencing. However, in practical deployment scenarios, the emotion categories appearing at each incremental stage are typically unforeseen in advance. This inherent uncertainty makes our current approach of randomized task shuffling with repeated evaluations (as shown above) a methodologically sound solution, ensuring robustness to arbitrary category arrival orders.
**3. Test cross-domain incremental learning (e.g., Brain27 → Audio28).**
We have conducted cross-domain incremental learning experiments from Brain27 to Audio28. The results on the transferred Audio28 dataset are compared with those obtained by training directly on Audio28 from scratch as follows:
|Method | Audio28 B0-I7 | Audio28 B0-I4 | Audio28 B16-I3| Audio28 B16-I2 |
|-----|-------------------|------------------|---------------------|--------------------|
|Brain→Audio|49.6 $\pm$ 0.1|48.4 $\pm$ 0.2|47.4 $\pm$ 0.5|45.5 $\pm$ 0.3 |
|Audio|49.4 $\pm$ 0.6|48.5 $\pm$ 0.3|47.9 $\pm$ 0.3|45.2 $\pm$ 0.2|
The experimental results demonstrate that transfer learning can enhance model performance compared to cold-start training in certain scenarios. However, due to the significant feature disparity, cross-domain transfer may instead degrade performance in some cases.
**4. Investigate unsupervised/semi-supervised affective dimension estimation for new tasks.**
In our work, we assume the affective dimension labels of samples are manually annotated ratings. Considering the prohibitive cost of manually rating each new sample in practical applications, we explored pre-training an affective dimension scoring model with additional held-out training set. This allows us to automatically extract affective dimension features for incoming samples using the pre-trained model. Comparative experimental results between knowledge distillation using model-predicted affective dimension labels versus human-annotated labels are presented below:
|Method | Brain27 B0-I9 | Brain27 B0-I3 | Brain27 B15-I3| Brain27 B15-I2 |
|-----|-------------------|------------------|---------------------|--------------------|
|Model-predicted|44.5 $\pm$ 0.2|43.8 $\pm$ 0.5|41.7 $\pm$ 0.4|39.7 $\pm$ 0.3 |
|Human-annotated |44.8 $\pm$ 0.3|43.9 $\pm$ 0.5|41.7 $\pm$ 0.5|39.9 $\pm$ 0.5 |
The results demonstrate that utilizing model-predicted affective dimension labels can achieve comparable performance to using human-annotated affective dimension labels.
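The pipeline described here (pre-train a scorer on a held-out annotated set, then auto-label incoming samples) can be sketched with a simple regressor. Ridge regression and all the shapes below are illustrative assumptions, not the actual scoring model:

```python
import numpy as np

rng = np.random.default_rng(3)
# Held-out set with human-annotated affective-dimension ratings
# (e.g., valence/arousal as two continuous targets).
w_true = rng.normal(size=(32, 2))
x_held = rng.normal(size=(500, 32))
x_new = rng.normal(size=(100, 32))
y_held = x_held @ w_true + 0.05 * rng.normal(size=(500, 2))

# Pre-train a ridge-regression scorer on the held-out set.
lam = 1e-2
w_hat = np.linalg.solve(x_held.T @ x_held + lam * np.eye(32),
                        x_held.T @ y_held)

# Model-predicted affective-dimension labels for incoming samples,
# used in place of manual ratings as distillation targets.
y_pred = x_new @ w_hat
```

With enough annotated held-out data, the predicted ratings track the true ones closely, which is consistent with the near-identical mAP of the model-predicted and human-annotated rows above.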
**5. Formalize guarantees for knowledge transfer between affective spaces and emotion categories.**
We sincerely appreciate this foundational question. In our upcoming work, we will establish error bounds for cross-space knowledge transfer between emotion category manifolds and dimensional affective spaces by deriving Lipschitz continuity conditions and optimal transport-based projection guarantees. | null | null | null | null | null | null |
It's Not Just a Phase: On Investigating Phase Transitions in Deep Learning-based Side-channel Analysis | Reject | Summary: This paper investigates deep learning-based side-channel analysis and introduces mechanistic interpretability methods to understand how neural networks trained on side-channel models learn. Specifically, the paper transforms black-box evaluation into white-box evaluation through reverse engineering, revealing the features learned by the network during phase transitions. The results on CHES_CTF, ESHARD, and ASCAD demonstrate the effectiveness of investigating the structures learned during phase transitions and provide evidence for the weak universality of circuits in side-channel models.
Claims And Evidence: Yes. The claims made in the submission are well-supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes. The Logit analysis method is used to identify key features and patterns in the model predictions. The activation analysis method is capable of finding physical leakage information in the principal components. The activation patching method is used to verify whether the features have causal relationships and is employed for reverse engineering the masks.
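The activation-patching check can be sketched as overwriting one hidden unit's activation with its value from another input and measuring the resulting logit shift; the two-layer ReLU network below is a hypothetical stand-in for the MLPs studied in the paper, not the actual model.

```python
import numpy as np

def forward(x, w1, w2, patch=None):
    h = np.maximum(x @ w1, 0.0)    # hidden activations (ReLU MLP)
    if patch is not None:          # activation patching: overwrite one unit
        unit, value = patch
        h = h.copy()
        h[unit] = value
    return h @ w2                  # output logits

rng = np.random.default_rng(1)
w1 = rng.normal(size=(8, 4))
w2 = rng.normal(size=(4, 3))
x_base, x_source = rng.normal(size=8), rng.normal(size=8)

# Patch unit 0 of the base run with its activation from the source run.
h_source = np.maximum(x_source @ w1, 0.0)
base_logits = forward(x_base, w1, w2)
patched_logits = forward(x_base, w1, w2, patch=(0, h_source[0]))
effect = patched_logits - base_logits   # causal contribution of unit 0
```

If `effect` is large and moves the logits in a predictable direction, the patched unit carries a causally relevant feature; patching a unit with its own value leaves the output unchanged, a useful sanity check.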
Theoretical Claims: Yes. This paper does not involve theoretical statements. All hypotheses are based on existing literature, such as phase transition theory.
Experimental Designs Or Analyses: Yes. Experiments on multiple datasets validate the effectiveness of the mechanistic interpretability analysis.
Supplementary Material: No.
Relation To Broader Scientific Literature: Benadjila R, Prouff E, Strullu R, et al. Deep learning for side-channel analysis and introduction to ASCAD database[J]. Journal of Cryptographic Engineering, 2020, 10(2): 163-188.
Perin G, Karayalcin S, Wu L, et al. I know what your layers did: Layer-wise explainability of deep learning side-channel analysis[J]. Cryptology ePrint Archive, 2022.
This paper extends previous work by explaining deep learning side-channel analysis from a different perspective.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. This paper investigates the phase transition phenomenon in Deep Learning-based Side-channel Analysis.
2. The motivation is clear, and the literature review is thorough.
3. The phenomena revealed by the experiments are clear.
Weaknesses:
The paper lacks a discussion of the reasons behind the phase transitions. The main concerns are the roles of the network layers, the Adam learning rate, and the properties of the dataset itself.
Other Comments Or Suggestions: 1. For writing, it is recommended to adopt a general-to-specific structure, which would improve the clarity of the article. Specifically, start by introducing the overall framework before detailing the functions of individual modules.
2. For the experimental analysis section, it is recommended to supplement the discussion with the impact of the main layers of the neural network, the training strategy, and the properties of the dataset.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the review. We are glad the motivations and phenomena we describe are clear.
W1: As mentioned in the paper, the (potential) reasons for learning occurring in discrete phase transitions are initially discussed in [1], offering preliminary insights into the phenomenon. More elaborate theoretical discussions on phase transitions can be found in the literature on Singular Learning Theory [2,3], which we consider out of scope for this work. However, we will add references relevant to these theoretical aspects to address the reviewer's concern.
Other Suggestions:
1: We believe that our current structure already incorporates this principle. The introduction and the subsequent sections on SCA and MI are designed to provide a broad, general overview of the necessary background and concepts. These sections establish the foundation for understanding our approach, which is introduced and detailed in Section 4. We can highlight this further in our paper if it is not immediately apparent.
2: Our work has a section discussing the results and their implications in the SCA domain. We are not quite sure what you mean by `impact of the main layers', but if this concern is similar to the concerns raised by reviewer 6ugN (see weaknesses 2 and 4), please see the corresponding response. The dataset details and training process are provided in Appendices A and B.
[1]: Michaud, E. J., Liu, Z., Girit, U., and Tegmark, M. The quantization model of neural scaling. NeurIPS 2023
[2]: Watanabe, Sumio. Algebraic geometry and statistical learning theory. Vol. 25. Cambridge university press, 2009.
[3]: Wei, Susan, et al. "Deep learning is singular, and that’s good." IEEE Transactions on Neural Networks and Learning Systems 34.12 (2022): 10473-10486.
---
Rebuttal Comment 1.1:
Comment: I double-checked the paper carefully and found that some of the key issues have already been addressed to some extent in the appendix and the rebuttal, such as the training strategy and the types of side channels in the dataset. As Reviewer 6ugN also noted, the role of network architecture—such as CNNs and MLPs—could benefit from further insights. While I still have some concerns in this regard, I believe they do not significantly affect the core contributions of the paper. Therefore, I have decided to raise my score by one point.
---
Reply to Comment 1.1.1:
Comment: Thank you for your careful re-evaluation of our paper and for acknowledging that some of your key concerns regarding the training strategy and side channel types have been addressed in the appendix and rebuttal. | Summary: The paper explores the novel concept of phase transitions within the context of Deep Learning-based Side-channel Analysis (DLSCA). It introduces an approach for mechanistic interpretability, aimed at understanding the detailed mechanisms of how deep learning models adapt and operate during the phase transitions associated with training, specifically targeting the field of side-channel analysis where sensitive data is at risk. The authors investigate these transitions to uncover the specific leakage points that DL models exploit, enhancing the transition from black-box to white-box understanding of model behaviors. This research reveals how networks adjust their internal representations and decision-making processes to improve attack performance, thereby offering insights into both enhancing attack strategies and developing robust countermeasures.
Claims And Evidence: See strengths and weaknesses.
Methods And Evaluation Criteria: See strengths and weaknesses.
Theoretical Claims: See strengths and weaknesses.
Experimental Designs Or Analyses: See strengths and weaknesses.
Supplementary Material: See strengths and weaknesses.
Relation To Broader Scientific Literature: See strengths and weaknesses.
Essential References Not Discussed: See strengths and weaknesses.
Other Strengths And Weaknesses: Strengths:
1. Focuses on a lesser-studied aspect of DLSCA, providing fresh insights into the dynamic changes in model behavior during training, known as phase transitions. Offers deep insights into the internal workings of DL models, particularly how they handle and process side-channel data during phase transitions. Directly applies findings to improve methods for attacking cryptographic devices, highlighting practical applications in security.
2. Utilizes sophisticated techniques such as mechanistic interpretability to analyze the models, providing a higher resolution of understanding. Enhances the ability to perform white-box analyses of side-channel attacks, which is crucial for developing effective security measures.
3. Employs a comprehensive set of experiments that validate the theoretical findings, strengthening the claims with empirical evidence. Bridges gaps between deep learning, cryptography, and security analysis, appealing to a broad audience. By understanding how models learn during phase transitions, the research contributes to designing better countermeasures against side-channel attacks. The paper is well-written with detailed analyses that are both deep and accessible, providing clarity on complex concepts.
Weaknesses:
1. The advanced techniques used may be difficult to understand or implement without a deep background in both machine learning and cryptography.
2. The study might be overly tailored to the specific types of neural networks studied, which could limit generalizability. Assumes access to certain model insights that might not be available in more secure or differently configured systems.
3. The methods discussed may require significant computational resources, limiting their applicability in constrained environments. While the paper provides a robust approach to understanding DLSCA, it could benefit from comparing its methods against other possible analytical techniques.
4. The focus on specific types of side-channel datasets might not reflect the full range of scenarios where DLSCA could be applied. It is not clear how well the approaches discussed would scale to larger or more complex datasets and models. The effectiveness of the techniques may rely heavily on the quality and nature of the data used, which can vary significantly in real-world scenarios. There is a risk that the training process might introduce biases, which the phase transition analysis might not fully account for. The paper could provide a more detailed discussion on the conditions or scenarios where the proposed techniques fail to provide insights or improvements.
Other Comments Or Suggestions: See strengths and weaknesses.
Questions For Authors: See strengths and weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the positive review. We are glad you found the analyses both deep and accessible.
W1: While this might be true, we hope this initial work will simplify future analyses by enumerating some (potentially) common structures. Additionally, automating some of these analyses could be possible in future work to streamline the use of interpretability analysis. We can expand the discussion on this in Section 7.
W2: While we only focus on MLPs in this work, the results on CNNs in [1] imply that these methods would translate there. Additionally, we see that MLPs from [2] and transformers from [3] learn the same algorithms for the same tasks, indicating consistency across architectures. We will add some more discussion on this to the camera-ready version.
W3: The computational load for applying the methods in the paper is very low. The main computational load in this work is training the initial model, which was already done before applying our MI analysis and is the core part of conducting DLSCA. Since models in SCA are typically small compared to those from NLP or CV domains, training models for the considered targets generally takes under an hour on an RTX 4080 GPU.
Alternative analytical techniques are either unsuitable or incomparable in this context. Many MI methods utilize input interventions that are impossible for the DLSCA analysis (as discussed in the paper), and the input visualization techniques previously used in SCA do not target model internals. Lastly, [1] requires access to masks, which makes it not directly comparable.
W4: Indeed, analyzing more difficult targets will be more challenging. However, we want to emphasize that our dataset selection already includes a range of complexity, with ESHARD being significantly noisier than CHES_CTF. However, we recognize this as a critical area for future investigation.
Our analysis objective here is to understand what the model has learned rather than what it should have learned. The introduction of biases during training is indeed a concern. If a model fails to capture certain leakage during training, our analysis will not reveal them. However, classical SCA approaches often rely on significant assumptions about the targets, introducing their own biases, while (black-box) DLSCA potentially mitigates having to make these assumptions. We will expand the discussion in Section 7.
[1]: Perin, G., Karayalcin, S., Wu, L., and Picek, S. I know what your layers did: Layer-wise explainability of deep learning side-channel analysis. Cryptology ePrint Archive, Paper 2022/108
[2]: Chughtai, B., Chan, L., and Nanda, N. A toy model of universality: Reverse engineering how networks learn group operations. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), International Conference on Machine Learning, ICML 2023
[3]: Nanda, N., Chan, L., Lieberum, T., Smith, J., and Steinhardt, J. Progress measures for grokking via mechanistic interpretability. In The Eleventh International Conference on Learning Representations, ICLR 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's rebuttal. The author's response did not solve all my questions well, so I kept my previous rating. | Summary: The paper applies mechanistic interpretability techniques to side-channel analysis, which is used to extract secret keys from protected devices by monitoring physical factors like power consumption. The authors investigate model behavior during phase transitions (sudden jumps in accuracy) by analyzing activations in MLP models. By projecting layer activations to 2D space using PCA and color-coding by target values, they identify interpretable patterns in learned representations. They validate these findings by altering the identified components and observing predictable changes in model output, demonstrating they’ve identified the causal mechanisms behind the model’s predictions.
## update after rebuttal
I have updated my score to 3, as the issues I previously mentioned have been resolved. However, the paper could still benefit from further clarifying its claim about tracing learned features back to specific input characteristics, particularly what is meant by "input characteristics": does this refer to internal device values like HW, or to model inputs like device temperature?
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: They are all sound.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper connects two fields: mechanistic interpretability and side-channel analysis security research. In mechanistic interpretability, their approach of investigating activations based on output classes offers a novel technique for understanding model behavior, especially when direct input interventions aren’t possible.
Essential References Not Discussed: Not found.
Other Strengths And Weaknesses: ## Strengths
1. The research experimented on multiple datasets and attack settings, showing robustness of their approach.
2. They successfully apply interpretability techniques to a real-world security problem. It is a potentially valuable test bed for future mechanistic interpretability research.
3. This method uniquely handles situations where direct input interventions aren’t possible.
## Weaknesses
1. The paper doesn’t fully explore how their findings could be used to improve device security, despite this being one of their stated motivations.
2. There’s no analysis connecting the learned features back to specific input characteristics that cause the leakage.
3. Their reliance on PCA indirectly assumes features are orthogonal, which may not hold true due to feature superposition effects documented in recent research.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the feedback. We are pleased you think DLSCA might be a valuable testbed for future MI research.
W1: Improving the security of devices requires a good understanding of the devices' vulnerabilities, and this work provides a concrete approach to understanding how and why a particular attack was successful. Additionally, this work offers several insights into the training dynamics in DLSCA, which can aid chip designers in utilizing countermeasures that aim to make these structures more challenging to learn. The recently introduced prime-field masking approach from [1] seems a viable approach. We will add some more discussion on this.
W2: The SNR plots with reverse-engineered masks tie the extracted masks back to specific input points. Furthermore, the learned structures within the model provide insights into how the input values leak (HW for ESHARD/CHES and 2 LSBs for ASCADr). Therefore, we want to emphasize that our analysis does connect learned features to specific input characteristics that cause leakage, but we will highlight these points more in the paper.
W3: The reviewer makes a valid point regarding the potential limitations of relying on PCA. However, we argue that the characteristics of the DLSCA models mitigate this limitation. As discussed in Section 4 (Activation Analysis) and supported by experimental findings using information bottlenecks in [2], DLSCA models typically learn a relatively small number of features. This reduces the likelihood and impact of significant feature superposition.
On the other hand, alternative techniques could be useful and offer potential improvements if there is significant feature superposition, such as sparse autoencoders [3].
[1]: Cassiers, G., Masure, L., Momin, C., Moos, T., & Standaert, F.-X. (2023). Prime-Field Masking in Hardware and its Soundness against Low-Noise SCA Attacks. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2023(2), 482-518. https://doi.org/10.46586/tches.v2023.i2.482-518
[2]: Perin, G., Karayalcin, S., Wu, L., and Picek, S. I know what your layers did: Layer-wise explainability of deep learning side-channel analysis. Cryptology ePrint Archive, Paper 2022/108
[3]: Bricken, et al., "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning", Transformer Circuits Thread, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. Your answers to W1 and W3 are both convincing.
I believe the second issue—connecting learned features back to specific input characteristics—could be further clarified. Input characteristics can be understood in two distinct ways:
1. Device internal features, such as the Hamming Weight (HW) or 2 LSBs (which I assume this is what you mean).
2. Model inputs, which consist of side-channel traces, such as power consumption or electromagnetic (EM) signals.
---
Reply to Comment 1.1.1:
Comment: Thank you for the quick reply!
On the input characteristics, the two categories you mentioned are indeed broadly what we care about from the SCA side.
1. More precisely, we are concerned with how the values manipulated by the (AES) algorithm running on the device leak (i.e., how the processed values influence the power usage or EM emissions). This is impacted by device internals and algorithm implementation but can also be influenced by the measurement setup and other external factors. In the SCA literature, the way the processed values leak in the measured side-channel information is referred to as the leakage model, such as the mentioned HW and LSB. In security evaluation or attack scenarios, these leakage models are often hypothesized. However, our results indicate that we can deduce this leakage model (or, more precisely, what leakage the model extracts). This is the Hamming weight (HW) for CHES_CTF and ESHARD and the two least significant bits for ASCADr.
2. Considering the side-channel traces (model inputs), we try to understand where those values are manipulated during (AES) computation. A common method for determining where a value is in a trace is to compute the signal-to-noise ratio (SNR) (or correlation) between the values and each point in the trace across many traces. By doing this with the extracted masks, we can show where each of these masks leaks within the trace (see Figure 4 (left)). By looking at this SNR plot, we can then also disambiguate which of the extracted values is the mask $r_m$ and which is the masked Sbox output $Sbox \oplus r_m$, as we can assume the masks have to be loaded before the masked Sbox computation can occur.
We will add a section/paragraph to the discussion to clarify the above. We will also expand on the SNR description for better understanding. | Summary: This paper investigates the feasibility of applying Mechanistic Interpretability (MI) to deep learning-based side-channel analysis (DLSCA) to enhance the interpretability of deep neural networks in security evaluations. The authors explore how neural networks exploit side-channel leakage and identify learned structures during training phase transitions. They also successfully demonstrate that networks extract secret mask values by analyzing model logits, principal components, and activation patching, effectively transitioning the evaluation from black-box to white-box.
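The SNR referenced here has a standard form in the SCA literature: per trace sample, the variance of the class-conditional means divided by the average class-conditional variance. A minimal sketch with synthetic traces follows; the leaking sample index (40) and the trace shapes are assumptions for illustration.

```python
import numpy as np

def snr(traces, values):
    # Per-sample SNR: variance of the class-conditional means (signal)
    # over the mean of the class-conditional variances (noise).
    classes = np.unique(values)
    means = np.stack([traces[values == c].mean(axis=0) for c in classes])
    varis = np.stack([traces[values == c].var(axis=0) for c in classes])
    return means.var(axis=0) / varis.mean(axis=0)

rng = np.random.default_rng(2)
mask = rng.integers(0, 2, size=5000)          # reverse-engineered mask values
traces = rng.normal(size=(5000, 100))         # side-channel traces
traces[:, 40] += 3.0 * mask                   # the mask leaks at sample 40
leak = snr(traces, mask)
```

Computing this SNR between the extracted masks and each point in the trace localizes where each mask leaks, which is what allows disambiguating the mask from the masked Sbox output by their temporal order.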
Claims And Evidence: Yes.
Methods And Evaluation Criteria: I appreciate this work; while I am not familiar with Mechanistic Interpretability, I could understand how the features in deep learning-based side-channel analysis are learned and how the key structures are identified. I only have one concern: what is unique about applying MI to DLSCA? Could the proposed method be extended to other security-related tasks, e.g., buffer overflow detection? If so, could the authors briefly explain the possible future direction? If not, could the authors justify why the proposed method is applicable only to side-channel analysis?
Theoretical Claims: I checked the analysis approaches in Section 4, including Logit Analysis, Activations Analysis, and Reverse engineering masks with activation patching. These claims look good to me.
Experimental Designs Or Analyses: The experimental designs look good to me in general. However, I am a bit confused about what type of side channels the dataset contains. I checked Appendix A and saw that the dataset contains execution traces of programs. Are these traces related to timing, cache, or even power side channels?
Supplementary Material: Appendix A and Appendix B.
Relation To Broader Scientific Literature: The contribution of this paper is concrete and could be able to enhance the future research on using Deep Learning for SCA.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ### Pros
- Interesting and important topic.
- The paper tries to enhance the interpretability of DLSCA.
### Cons
- The types of side channels in the dataset are not clearly introduced.
- Lacks discussion of applying this method to other similar fields.
Other Comments Or Suggestions: N/A.
Questions For Authors: - What is the uniqueness of applying MI to DLSCA? Could the proposed method be extended to other security-related tasks?
- What are the side channel types of the execution traces used in the dataset?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the feedback. We are glad to hear the core concepts were understandable, even without extensive prior knowledge of MI or DLSCA.
Q1: The principles of MI can indeed be extended to other security-related tasks, but the specific analysis techniques and challenges can vary significantly across domains. Our focus on DLSCA was motivated by the unique challenges it presents for MI, as it involves noisy and complex data, and we cannot perform well-defined input interventions. Moreover, we observe that the number of phase transitions in DLSCA models is small, allowing us to perform a detailed, individual examination of each transition, as the number of features relevant for classification is relatively small (see W3 for reviewer kqw9).
Thus, the specific techniques and the scale of the MI analysis must be adapted to the unique characteristics of the task and will (presumably) require significant expertise in those tasks. This work on DLSCA (and other related work) provides a valuable foundation for exploring these adaptations in future research.
We also note that for security-related tasks, network performance alone is often not sufficient for wide adoption. In tasks like malware detection [1] or fraud detection [2], using classification algorithms (without specific explanations) in production might not be allowed for legal reasons. Similarly, for NNs that attack cryptography (either using DLSCA or more ML-based differential cryptanalysis [3]), a pass/fail condition is only of limited use without explanations. We will add some more discussion on the broader uses of MI in security-related applications.
Q2: Utilized datasets consist of power (CHES_CTF) and electromagnetic emission (ESHARD and ASCAD) measurements from cryptographic executions on embedded devices (e.g., 15,000 points of power use across the first round of AES for CHES_CTF). This information is already stated in Appendix A, but we will mention it in the main text to clarify further.
[1]: Saqib, Mohd, et al. "A Comprehensive Analysis of Explainable AI for Malware Hunting." ACM Computing Surveys 56.12 (2024): 1-40.
[2]: Parkar, Erum, et al. "Comparative study of deep learning explainability and causal ai for fraud detection." International Journal on Smart Sensing and Intelligent Systems 1 (2024).
[3]: Gohr, Aron. "Improving attacks on round-reduced speck32/64 using deep learning." Advances in Cryptology–CRYPTO 2019: 39th Annual International Cryptology | null | null | null | null | null | null |
DPO Meets PPO: Reinforced Token Optimization for RLHF | Accept (spotlight poster) | Summary: This paper develops an RLHF framework with a fine-grained token-wise reward characterization. Specifically, they model RLHF as an MDP, offering a more precise token-wise characterization of the LLM’s generation process. They introduce RTO algorithm, which extracts token-wise reward signals from offline preference data and subsequently performs RL training with respect to the learned token-wise rewards. In practice, they will introduce a practical implementation of RTO, which uses a token-wise reward extraction approach from DPO.
## update after rebuttal: Thanks for the rebuttal. I do not have any further question. I will keep my original rating.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes.
Theoretical Claims: See my comments in "Questions For Authors".
Experimental Designs Or Analyses: The experiment looks good.
Supplementary Material: I checked the additional related work and Remark C.2 (extension to unknown transition).
Relation To Broader Scientific Literature: See my comments in "Questions For Authors".
Essential References Not Discussed: See my comments in "Questions For Authors".
Other Strengths And Weaknesses: See my comments in "Questions For Authors".
Other Comments Or Suggestions: There are typos in Section 5. One of the competitive methods is called token-wise DPO (TDPO). The authors used "DDPO" and "DPPO" in plots and discussions. Are they all different?
Questions For Authors: 1. Assumption 3.1 is not a rigorous assumption. The parameters $A$ and $\xi$ have not been defined. What are their possible ranges of values? The current statement is not an assumption without conditions on $A$ and $\xi$. In addition, it is helpful to add more discussions on the intuition of this assumption and why it makes sense in the considered problem.
2. There is a gap between Algorithm 1 (Theoretical Version) and Algorithm 2 (Practical Version). The theorems were proved for Algorithm 1, while all experiments were done based on Algorithm 2. If Algorithm 1 is not implementable in practice, why introduce it? Can you prove the suboptimality gap bound for Algorithm 2 directly?
3. A few highly related key references were only mentioned in Section A of the Appendix. For example, Zeng et al. (2024) also considered token-level DPO. Can authors add more discussions on the difference and the technical novelty beyond this paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and support. Below are our response to your questions.
**Q1:** Assumption 3.1 is not a rigorous assumption. The parameters $A$ and
$\xi$ have not been defined. What are their possible ranges of values? The current statement is not an assumption without conditions $A$ and $\xi$. In addition, it is helpful to add more discussions on the intuition of this assumption and why it makes sense in the considered problem.
**A1:** The parameter $A = |\mathcal{A}|$ is the action set size (Line 220), and $\xi$ is a constant introduced in the assumption. To clarify this, we revise Assumption 3.1 as
Assumption 3.1. There exists a response $y = y_{1:H}$ satisfying $\pi^*(y|x) \ge A^{-\xi}$ for some $0 \le \xi \le H$.
We have also stated in the paper that "By the pigeon-hole principle, there must be a response $y$ such that $\pi^*(y | x) \ge A^{-H}$, implying that $\xi \le H$ naturally. In practice, $\xi$ is usually much smaller than $H$ because the language model tends to choose the optimal response rather than making a random guess."
More specifically, there are at most $A^H$ possible responses/trajectories $y = y_{1:H}$ (each step has $A$ actions (tokens) to choose from and the sequence length is at most $H$). Therefore, at least one response is chosen with a probability of at least $A^{-H}$. Moreover, given a prompt, the LLM will only generate several likely tokens with high probability, which makes the final generation probability of the most likely response significantly larger than the random guess probability $A^{-H}$. We denote this as $A^{-\xi}$ with $\xi \ll H$.
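For completeness, the pigeon-hole step can be written as a one-line derivation (there are at most $A^H$ responses, and their probabilities sum to one):

```latex
1 = \sum_{y} \pi^*(y \mid x)
  \le A^{H} \max_{y} \pi^*(y \mid x)
\quad \Longrightarrow \quad
\max_{y} \pi^*(y \mid x) \ge A^{-H},
```

so the $\xi$ in Assumption 3.1 always admits a value with $\xi \le H$.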
**Q2:** There is a gap between Algorithm 1 (Theoretical Version) and Algorithm 2 (Practical Version). The theorems were proved for Algorithm 1, while all experiments were done based on Algorithm 2. If Algorithm 1 is not implementable in practice, why introduce it? Can you prove the suboptimality gap bound for Algorithm 2 directly?
**A2:** We introduce the theoretical version primarily to demonstrate that *token-level MDPs with KL constraints can be learned sample-efficiently*. We believe this constitutes a new and important theoretical advancement that deserves emphasis. We also acknowledge that proving the suboptimality gap bound for Algorithm 2 directly is challenging; in fact, even for standard PPO, theoretical analysis often requires simplifications, such as replacing the GAE estimation with an optimistic estimation. Thank you for your question; we will highlight our motivation for introducing the theoretical version and discuss its current limitations.
**Q3:** A few highly related key references were only mentioned in Section A of the Appendix. For example, Zeng et al. (2024) also considered token-level DPO. Can authors add more discussions on the difference and the technical novelty beyond this paper?
**A3:** Sure. We will include more discussion in the revision. Meng et al. (2024) propose SimPO by modifying the DPO objective, replacing the reference model with response length, and adding a margin threshold. Zeng et al. (2024) also consider a token-level reward and leverage this insight to develop token-level DPO, which performs better than the original DPO. These are beyond the scope of our work. In contrast, we utilize the implicit token-level reward provided by the original DPO as the dense token-level reward for RL training.
**Q4:** The authors used "DDPO" and "DPPO" in plots and discussions. Are they all different?
**A4:** Thanks for pointing out the typo and we will correct them. Both "DDPO" and "DPPO" indicate the baseline that uses RTO reward delayed to the last token. | Summary: Summary: The authors propose a token-level MDP formulation for LLM. post-training. They use the token-level action probabilities from a Direct Preference Optimization (DPO) trained LLM. Authors argue that the current formulation of LLM post-training is closer to a contextual bandit than it is to a reinforcement learning, and hence does not utilize the full power of the RL machinery, i.e. credit assignment, advantage function etc. Further, authors propose a practical implementation of their method: RTO, where practitioners can directly plug-in DPO token-level probability estimates in a standard PPO implementation.
## update after rebuttal
All reviewers seem to agree that this is an interesting paper for RL fine-tuning of LLM. My initial assessment of accept has not changed.
Claims And Evidence: The claims made by the authors, i.e. a token-level reward is better than sentence-level reward for LLM post-training is well supported by the experimental claims. Authors compare with both “RL-free” methods like (DPO, SimPO), and the standard online RL-method (PPO).
Methods And Evaluation Criteria: The proposed method is a very sensible extension of the existing sentence-level PPO fine-tuning, and the experimental setup is sound.
Theoretical Claims: I have checked the theoretical claims: Proposition 3.2 (sentence-level post-training has a much higher sample complexity than token-level post-training), Theorem 4.2 (Sub-optimality of token-level rewards), Eq. 4.7 (practical version of token-level rewards), and they all make sense to me.
Experimental Designs Or Analyses: The experimental design and the analysis of the proposed method RTO, including the sample efficiency, reward granularity are sound.
Supplementary Material: I did not verify the theoretical offline RL related proofs, since I am not super familiar with them.
Relation To Broader Scientific Literature: The paper improves the post-training method for LLMs, and given the significance of LLMs recently, the presented method can have a broad impact.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: 1. The paper is very well-written, especially the introduction and discussion of preliminary work. In my experience, most papers do not explicitly spell out that the most of the current setup for LLM post-training is in fact a contextual bandit setup, and not a MDP setup, so it's nice to see this assumption explicitly spelled out.
2. In section 3.3., authors prove that the sentence-level MDP formulation is sample inefficient in terms of sample complexity, and a token-level formulation is required.
3. The paper has a nice mix of both solid theoretical analysis, and practical algorithm for implementation. In my experience, some papers (on this topic) are either purely empirical, or purely theoretical. Nothing wrong with it, but it's nice to see a paper with a good mix of both.
Other Comments Or Suggestions: NA
Questions For Authors: 1. I have a question on the results. SimPO method outperforms PPO by a large margin (Table 1). Do you have any intuition for this? Literature seem to suggest that RL-free offline methods are outperformed by online methods, such as PPO, but the results in this paper seem to suggest differently.
2. Is Assumption 4.1 valid for only linear reward models, or does it also apply for LLM-based reward model (with \pi fixed, and \theta parameter)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review and support. Below are our response to your questions.
**Q1:** I have a question on the results. SimPO method outperforms PPO by a large margin (Table 1). Do you have any intuition for this?
**A1:** Our observation is that RL-free methods are better than deep RL methods at fitting **style**, while human/GPT-4 preferences are easily hacked by preference bias (see Figure 3 of [1] and related discussions). Compared to DPO, SimPO further removes the KL constraint, so it is easier for it to achieve a higher in-domain score (AlpacaEval). However, we also notice that SimPO often hurts OOD reasoning performance (see Table 9 of their paper for example).
[1] From Lists to Emojis: How Format Bias Affects Model Alignment
**Q2:** Literature seem to suggest that RL-free offline methods are outperformed by online methods, such as PPO, but the results in this paper seem to suggest differently.
**A2:** Indeed, almost all closed-source state-of-the-art LLMs, including ChatGPT, GPT-4, and the recent DeepSeek R1, are trained using RL-based methods like PPO and GRPO. However, the open-source community has struggled to replicate these RL training techniques effectively. Our work aims to narrow this gap by exploring both dense and sparse reward approaches. | Summary: The paper introduces Reinforced Token Optimization (RTO), a framework that integrates Direct Preference Optimization (DPO) and Proximal Policy Optimization (PPO) to improve Reinforcement Learning from Human Feedback (RLHF). The authors argue that existing RLHF implementations using PPO underperform due to a mismatch between sentence-level reward modeling (bandit formulation) and PPO’s requirement for token-wise rewards. RTO addresses this by reformulating RLHF as a Markov Decision Process (MDP) with token-level rewards. Key contributions include:
- MDP formulation
- RTO algorithm
- Theoretical guarantees
- Empirical results
Claims And Evidence: The claims are supported by theoretical analysis and empirical validation:
- MDP superiority: Theoretically justified via comparisons between bandit and MDP formulations (Section 3.3).
- RTO’s effectiveness: Experiments on standard benchmarks (AlpacaEval 2, Arena-Hard) and ablation studies (e.g., data scaling) validate performance gains.
- Token-wise reward extraction: The use of DPO to derive token-level rewards (Appendix D.1) is plausible but requires deeper scrutiny (see Q4/Q5).
Methods And Evaluation Criteria: - Methods: RTO’s integration of DPO and PPO is novel and addresses the token-wise reward gap in RLHF. The MDP formulation aligns with LLMs’ autoregressive nature.
- Evaluation: Benchmarks (AlpacaEval 2, Arena-Hard) are standard for LLM alignment. However, details on baseline implementations (e.g., PPO hyperparameters) and statistical significance are unclear.
Theoretical Claims: The paper claims RTO’s sample efficiency (Section 4) but lacks formal proofs. The theoretical analysis in Section 3.3 (MDP vs. bandit) is intuitive but not rigorously proven. Appendix B mentions a "near-optimal policy" guarantee but does not provide a full proof.
Experimental Designs Or Analyses: - Baselines: Comparisons with PPO, DPO, R-DPO, and SimPO are reasonable, but implementation details (e.g., reward models, KL penalties) are sparse.
- Data efficiency: The claim that RTO achieves PPO-level performance with 1/8 of the data is compelling but requires validation across multiple seeds.
- Ablation studies: Limited analysis of RTO’s components (e.g., DPO’s role in token-wise rewards).
Supplementary Material: Yes. Appendix D.1 discusses DPO’s principles, and Appendix B outlines theoretical foundations.
Relation To Broader Scientific Literature: The work builds on RLHF (Ziegler et al., 2019), PPO (Schulman et al., 2017), and DPO (Rafailov et al., 2023). It advances the field by addressing the token-level reward sparsity problem, a known limitation of PPO in RLHF.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
- Novel integration of DPO and PPO;
- strong empirical results;
- addressing a critical RLHF limitation.
Weaknesses:
- Theoretical gaps;
- limited ablation studies;
- insufficient implementation details for reproducibility.
Other Comments Or Suggestions: None
Questions For Authors: 1. Theoretical Proofs: Could you provide a complete proof for the sample efficiency claim in Appendix B?
2. DPO’s Role: How does DPO-derived token-wise reward correlate with human preferences? Is there empirical validation beyond benchmark scores?
3. Baseline Details: Were PPO and DPO baselines trained with identical hyperparameters (e.g., KL penalty, reward scaling)?
Responses to these questions could strengthen the paper’s theoretical grounding and empirical rigor.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. Below are our responses.
**Question Regarding Theory:** Theoretical Proofs: Could you provide a complete proof for the sample efficiency claim in Appendix B?
**Response:** We have provided complete and rigorous proofs for both Proposition 3.2 and Theorem 4.2 in Appendix B. Should you have any specific questions or require further clarification, we are happy to address them. Regarding our extension mentioned in Remark B.3, we have acknowledged that this is a standard result in RL literature (see e.g., Theorem 2 in [1] for a detailed proof). Given that this standard extension is not central to the main claims of our paper, we opted to provide a concrete reference rather than including a detailed proof. We appreciate your understanding.
[1] Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization. https://arxiv.org/pdf/2007.06558
**Question Regarding Experiments:**(1) DPO’s Role: How does DPO-derived token-wise reward correlate with human preferences? Is there empirical validation beyond benchmark scores? (2) Baseline Details: Were PPO and DPO baselines trained with identical hyperparameters (e.g., KL penalty, reward scaling)? (3) Data efficiency: The claim that RTO achieves PPO-level performance with 1/8 of the data is compelling but requires validation across multiple seeds.
**Response:** Thank you for your question.
(1): We have added the following results to illustrate that DPO-derived token-wise rewards effectively capture human preferences. For the pairwise data, we measure consistency between human choices and DPO reward choices (reward implies an order). We observed a 79.23% consistency in the training data and 72.72% in the test data. Additionally, we want to emphasize that the performance of DPO-derived token-wise rewards used in reinforcement learning training, evaluated by widely recognized benchmarks, is a key standard for assessing its quality and is also the central focus of our work. To further illustrate the generalizability of DPO-derived reward, we've extended our methods to another RL algorithm REINFORCE++ [1].
The following table presents the Alpaca Eval 2 benchmark scores, further demonstrating the ability of DPO rewards to effectively capture human preferences and the broader applicability of our RTO method.
| Algorithm | AE2 LC | AE2 WR |
| -------- | -------- | -------- |
| REINFORCE++ | 18.28 | 13.91 |
| RTO (with REINFORCE++ objective) | 24.71 | 23.11 |
[1] REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models
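The consistency check described in (1) can be sketched as follows. This is a minimal toy example: the per-token log-probs and the two preference pairs are invented, but the ranking quantity is the paper's DPO implicit reward $\beta \log \frac{\pi(y|x)}{\pi_{\mathrm{ref}}(y|x)}$ summed over tokens:

```python
import numpy as np

BETA = 0.1  # the DPO KL coefficient reported in our setup

def dpo_reward(logp_pi, logp_ref):
    """Implicit DPO reward of a full response: beta times the sum of
    per-token log-ratios log pi(a_t|s_t) - log pi_ref(a_t|s_t)."""
    return BETA * (np.sum(logp_pi) - np.sum(logp_ref))

# Two hypothetical preference pairs: per-token log-probs of the chosen and
# rejected responses under the DPO policy and the reference model.
pairs = [
    (np.array([-1.0, -0.5]), np.array([-1.2, -0.9]),    # chosen: pi, ref
     np.array([-2.0, -1.5]), np.array([-1.8, -1.4])),   # rejected: pi, ref
    (np.array([-0.7, -0.3, -0.2]), np.array([-0.9, -0.6, -0.4]),
     np.array([-1.1, -0.8]), np.array([-1.0, -0.9])),
]

# Consistency: fraction of pairs where the DPO reward ranks the human-chosen
# response above the rejected one.
agree = sum(dpo_reward(cp, cr) > dpo_reward(rp, rr) for cp, cr, rp, rr in pairs)
consistency = agree / len(pairs)
print(consistency)
```

On real data, `pairs` would hold the log-probs of each annotated preference pair, yielding the 79.23%/72.72% figures above.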
(2): We have included the detailed hyperparameter choices in Appendix E. The DPO and PPO hyperparameters differ, and both sets were derived from well-tuned hyperparameters provided by the OpenRLHF project. For your convenience, we present important hyperparameters here: The KL penalty coefficient ($\beta$) is $0.01$ for PPO and $0.1$ for DPO. Other PPO hyperparameters are clip coefficient $\epsilon=0.2$ and GAE $\lambda=0.95$. We train a reward model for PPO, and another tiny reward model for RTO ($r_\text{MLE}$) following the standard practice. The reward scaling of PPO is $1$ for reward model and $\beta=0.01$ for KL reward penalty. For RTO, we use $\beta_3=1$ for reward model, $\beta_2=0.01$ for KL reward penalty, and $\beta_1=0.05$ for DPO reward.
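For concreteness, here is a rough sketch of how these three coefficients combine into a per-token training reward under our reading of (4.7). All numeric values are invented; note that the DPO log-ratio uses the fixed DPO-trained model, while the KL penalty uses the current policy:

```python
import numpy as np

# Hypothetical per-token log-probs for one 5-token response (numbers invented).
logp_dpo = np.array([-0.3, -1.1, -0.4, -0.8, -0.5])  # DPO-trained model (fixed)
logp_cur = np.array([-0.4, -1.2, -0.3, -0.9, -0.6])  # current policy being trained
logp_ref = np.array([-0.5, -1.0, -0.4, -1.1, -0.7])  # reference model
r_rm = 1.8                                           # sentence-level RM score

beta1, beta2, beta3 = 0.05, 0.01, 1.0                # coefficients from Appendix E

# Dense per-token reward: DPO implicit reward minus the KL penalty ...
reward = beta1 * (logp_dpo - logp_ref) - beta2 * (logp_cur - logp_ref)
# ... plus the sparse RM score assigned to the final token only.
reward[-1] += beta3 * r_rm

print(reward.round(4))
```

The intermediate tokens receive small dense signals from the DPO log-ratio, while the final token additionally carries the full reward-model score.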
(3): Thanks for the suggestion. Our implementation is based on OpenRLHF, and our (also the community's) experience (across hundreds of LLM RL training runs in either preference learning or math/coding tasks) is that with a well-tuned recipe, LLM RL is very stable and not greatly affected by the choice of random seed. As a result, we believe our findings are sufficient to demonstrate the sample efficiency claimed.
Claims And Evidence: ### Claims
The authors' main claims are that (1) they propose a framework for RLHF as an MDP; (2) they demonstrate near-optimal sample efficiency under this model; (3) they achieve improvements on well-established benchmarks under the new reward model.
### Evidence
**Regarding (1)**
I think the contribution should be made more precise and I found it very confusing at first read.
It is true that the reward modeling step assumes a bandit structure, but this step is only for the reward learning stage, and the bandit structure itself is quite irrelevant. The RL (PPO) step in current RLHF is already doing RL over a token-level MDP by assigning a reward per token (with zero reward for all but the last token).
I also question whether the currently proposed reward modeling step is framed as an MDP. This is so because the MLE objective in (4.1) is still at the trajectory level. There is simply no way to mathematically define it for a partial trajectory. If it were truly at the transition level, I could apply it over a random sample of transitions. But their framework still requires randomly sampling over trajectories. Could the authors comment on that?
As another comment, the authors themselves state that:
> “There have also been previous works (Pacchiano et al., 2021; Chen et al., 2022; Wang et al., 2023; Li et al., 2023c; Zhan et al., 2023a) studying RLHF under the MDP framework, also known as dueling RL and preference-based RL. However, these works do not consider the KL constraint, which is an essential component of RLHF.”
Hence, the initial claim in (1) should be made consistent with this statement.
Ultimately, I think this paper should be framed more simply as a reward modeling strategy for RLHF or a reward shaping strategy.
**Regarding (2)**
* I think this statement is slightly imprecise since the optimality bound of Theorem 4.2 depends on a tuning parameter lambda which is not part of the reward model r.
* I also don’t understand very well the value of Assumption 4.1 (Linear reward), since the authors propose exactly how to fit the reward model under the proposed modified BT/DPO at the token level. Shouldn’t the assumption be true or false by construction?
**Regarding (3)**
The experiments look promising, but they are also quite limited since they only apply to baseline Llama 8B and a relative small number of baselines. See my comment on the experiments section.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I skimmed over the theoretical derivation, but did not check them carefully.
Note that $\xi$ in Assumption 3.1 is never defined. What is it? Can you define it formally and provide an intuition, since it is so important to the claim of sample complexity optimality?
Experimental Designs Or Analyses: Some additional experiments could improve the practical value of the method:
1. Experiment with additional base LLMs. Relying solely on Llama 8B is very limiting. Ideally there would be more baseline models and a larger LLM.
2. Does the performance gains also hold if an alternative RL method for step 2 is applied (e.g. RLOO [1]). PPO is probably not the state of the art.
3. How sensitive is the performance to the hyperparameter $\beta_3$?
[1] Ahmadian, et al. (2024). Back to basics: Revisiting reinforce style optimization for learning from human feedback in llms. arXiv preprint arXiv:2402.14740, 2024.
Supplementary Material: I skipped the extension to the unknown transition model since it wasn’t relevant to the main claims of the paper.
Relation To Broader Scientific Literature: The paper is very timely given the centrality of LLM finetuning in modern AI methods across various fields.
Essential References Not Discussed: As far as I am aware, the paper conducts a comprehensive review.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: *Minor suggestion*: I was a bit thrown off by the phrase “After examining the open-source implementation of PPO” in the introduction. I eventually understood the intention of the authors. But first of all, I don’t know what open source has to do with it. Second, PPO is not an ambiguous algorithm; in fact, the authors are implementing it in their paper without modification. The real issue is the mismatch between the “sentence-level reward modeling stage” and the required token-level reward in PPO, which results in the potentially inefficient reward assignment to the last token only.
Questions For Authors: I have already included my questions in the other sections.
Additional questions:
1. How does the role of $\lambda$ hyperparameter in the theoretical version carries to the practical version?
2. If the main value of the framework is the reward shaping, could I obtain a similar performance by applying reward shaping techniques (e.g., potential based) to the sentence-level reward?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper. We summarize your questions and address them as follows.
**Q1:** MDP fomulation and Previous Theoretical Work
**A1:** We agree with the reviewer that the existing PPO is implemented in an *MDP with zero reward for all but the last token*, but we would like to claim that this is *essentially a bandit problem*. This point is explicitly stated in Section 3.5 of the seminal InstructGPT paper [1]. In contrast, a token-level MDP with dense rewards is a real multi-step decision-making problem (see Proposition 3.2 for a theoretical gap).
We will emphasize the importance of the KL constraint in claim (1), to differentiate it from existing theoretical work on dueling RL. Moreover, we would like to emphasize that these works are purely theoretical and do not lead to corresponding practical algorithms leveraging dense rewards.
Finally, the token-level MDP terminology is mainly used to emphasize the use of dense rewards, in contrast to the bandit formulation with sparse rewards, which aligns with the reviewer's understanding. This token-level MDP was first rigorously introduced in our paper, and it is well recognized by the community [2]. Therefore, we prefer to maintain the terminology of token-level MDPs with KL constraints while highlighting the differences from existing bandit formulations and theoretical dueling RL work.
[1] Training language models to follow instructions with human feedback
[2] Value-incentivized preference optimization: A unified approach to online and offline rlhf
**Q2:** MLE in (4.1) at the trajectory level
**A2:** The MLE objective in (4.1) appears to be at the trajectory level because the preference signal is usually given at the trajectory level, and the cumulative token-level reward equals the trajectory-level reward. However, even the trajectory-based preference reward can naturally induce a cumulative token-level reward. This is more evident for problems with step-wise structure, like mathematical reasoning [3].
[3] Entropy-Regularized Process Reward Model
**Q3:** Theorem 4.2 depends on $\lambda$ and the role of $\lambda$
**A3:** The parameter $\lambda$ is just a regularization parameter for theory (ensure $\Sigma_{\mathcal{D}}$ has inverse), and our Theorem 4.2 holds for any $\lambda>0$. You can simply regard $\lambda = 1$.
**Q4:** Verify Assumption 4.1 by construction
**A4:** If we choose $\phi(s, a) = 1_{(s, a)}$ as the one-hot vector and $\theta_{(s, a)}=r(s, a)$, then the assumption is satisfied with $d = |\mathcal{S}| \times |\mathcal{A}|$. However, the reward function may exhibit a low-dimensional representation $\phi \in \mathbb{R}^d$ with $d < |\mathcal{S}| \times |\mathcal{A}|$, and thus we assume Assumption 4.1 with potentially small $d$ and do not make the explicit construction. This is also the main motivation of linear bandits and linear MDPs.
**Q5:** Experiments: (1) limited base model and baselines. (2) application to other RL algorithms; (3) hyperparameter $\beta_3$?
**A5:** (1) Regarding baseline selection, we focused on the most relevant (DPO, PPO), strong (R-DPO, SimPO), and concurrent (SimPO, TDPO) methods. Given the limited performance of numerous other alignment algorithms reported in SimPO, we opted not to include them in our comparisons. Due to time limit, we are still trying to add experiments on other base models. We appreciate your understanding.
(2) We extend our experiments to REINFORCE++ (RF++) [4], an alternative RL algorithm. We selected RF++ since
- Unlike reasoning tasks, chat alignment typically does not involve sampling multiple responses per prompt, as RLOO and GRPO require.
- RF++ exhibits significant differences from PPO, notably the absence of a critic model.
The table below shows the power of RTO when applied to RF++.
| Algorithm | AE2 LC | AE2 WR |
| -------- | -------- | -------- |
| RF++ | 18.28 | 13.91 |
| RTO | 24.71 | 23.11 |
[4] REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models
(3) Since only the relative ratio matters and to minimize tuning, we fix $\beta_2$ to the same value PPO uses, choose $\beta_3 = 1$, and tune $\beta_1$ (see the discussion below (4.7)). In our internal experiments, we found our algorithm is quite robust to the choice of $\beta_1$. Thus we believe that, conversely, RTO is not sensitive to the choice of $\beta_3$.
**Q6:** Reward shaping
**A6:** Our algorithm can be interpreted as a form of potential-based reward shaping, with $F(s, s' = (s, a)) = \Phi(s') - \Phi(s) = \log \frac{\pi(s')}{\pi_{\mathrm{ref}}(s')}-\log \frac{\pi(s)}{\pi_{\mathrm{ref}}(s)}=\log\frac{\pi(a|s)}{\pi_{\mathrm{ref}}(a|s)}$, where $\Phi$ is the potential function and $F(s, s' = (s, a))$ denotes the token-wise reward function added to each token. Thank you for the insightful question and we will emphasize this in the revision.
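The telescoping identity behind this potential-based view can be checked numerically (toy per-token log-probs, invented for illustration): the per-token shaping rewards $\log\frac{\pi(a|s)}{\pi_{\mathrm{ref}}(a|s)}$ sum exactly to the sequence-level log-ratio $\log\frac{\pi(y|x)}{\pi_{\mathrm{ref}}(y|x)}$:

```python
import numpy as np

# Per-token log-probs of a hypothetical 4-token response (numbers invented).
logp_pi  = np.array([-0.2, -1.1, -0.4, -0.8])
logp_ref = np.array([-0.5, -0.9, -0.7, -1.0])

# Shaping reward per token: F(s, s') = log pi(a|s) - log pi_ref(a|s)
per_token = logp_pi - logp_ref

# Telescoping check: shaped rewards sum to the sequence-level log-ratio.
seq_log_ratio = logp_pi.sum() - logp_ref.sum()
print(bool(np.isclose(per_token.sum(), seq_log_ratio)))
```

Because shaping rewards telescope into a difference of potentials, they leave the optimal policy unchanged, which is the standard guarantee for potential-based shaping.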
Regarding your minor suggestion, we would emphasize the issue of the reward modeling type directly, rather than mentioning the open-source implementation.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My questions have been resolved so I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for reading our response. We truly appreciate your support and are glad we could address your questions. | null | null | null | null | null | null |
Stochastic Online Conformal Prediction with Semi-Bandit Feedback | Accept (poster) | Summary: The paper studies the problem of conformal prediction with semi-bandit feedback. In this setting, it formulates an online learning problem where the algorithm must generate conformal sets that maintain valid coverage guarantees in each round. The authors propose an algorithm that satisfies this requirement and demonstrate that it achieves sublinear regret under a regret definition based on score thresholds.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, I checked all the proofs in the paper.
I believe there is a minor issue in the proof of Theorem 3.1. In the transition from line 285 to 287, the notation switches from $\overline{G}_t$ to $G_t$. If we use the equality $\overline{G}_t(\tau) = G_t(\tau) + \epsilon_t$, an additional application of the triangle inequality is required, which introduces an extra contribution of $\epsilon_t$.
Going through the proof, it seems to me that the proof easily generalizes to the case where $\psi$ is $\alpha$-Holder for $\alpha \leq 1$. The regret should still be sublinear but with a rate slower than $\sqrt{T}$ and perhaps approaching $T$ as $\alpha \to 0$.
Experimental Designs Or Analyses: Yes, I think the experiments are good and comprehensive. One additional aspect I would like to see is a $\log$-$\log$ plot of regret along with the corresponding slope parameter to verify whether the rate is indeed smaller than $\sqrt{T \log{T}}$. From the results, it appears that the rate is approximately $\sim \sqrt{T}$, but a formal verification would be helpful. Alternatively, the authors could include a reference plot of $f(T) = \sqrt{T \log{T}}$ in Figure 1 for comparison.
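For instance, the suggested rate check could be done with a least-squares fit on the log-log scale; the synthetic regret values below (my own stand-in data, not the paper's measurements) illustrate that a $\sqrt{T \log T}$ curve yields a fitted slope slightly above $0.5$.

```python
import math

def fit_rate_exponent(Ts, regrets):
    # Least-squares slope of log(regret) vs log(T); a slope near 0.5 is
    # consistent with a ~sqrt(T) regret rate (up to log factors).
    xs = [math.log(t) for t in Ts]
    ys = [math.log(r) for r in regrets]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic regret curve following sqrt(T log T).
Ts = [10 ** k for k in range(2, 7)]
regrets = [math.sqrt(t * math.log(t)) for t in Ts]
slope = fit_rate_exponent(Ts, regrets)
assert 0.5 < slope < 0.65  # just above 1/2 due to the log factor
```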
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper seems to improve on existing results.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: The paper is generally well-written and is easy to follow. The theoretical results are nice and verified with experiments.
Regarding potential weaknesses, I am not entirely convinced by the notion of regret as defined in the paper. Why is $\tau$ given such emphasis? It seems more like a means to an end rather than the ultimate objective. What truly matters is the size of the conformal sets. Would it be a big deal if the conformal set remains nearly the same for $\tau = \tau^{\star}$ and $\tau = 2\tau^{\star}$? A more natural metric might be the size of the conformal set in classification with finite labels or, for regression, its Lebesgue measure.
Other Comments Or Suggestions: N/A
Questions For Authors: What is $\epsilon_t$ in Lemma 4.2? I assume it corresponds to the $\epsilon_t$ defined before Equation (4), but it should be explicitly referenced to avoid ambiguity.
Additionally, why is Theorem 3.1 stated twice in the paper?
In the proof of Theorem 3.1, immediately after the line “By a union bound and by Lemma 4.1, we have”, what is $\delta$ here?
Assumption 2.1 seems to appear without sufficient justification. The authors state that ``A natural way to formalize the continuity assumption is based on the cumulative distribution function (CDF) of the scoring function” and then assume $\phi(\tau) = \psi(G^{*}(\tau))$. Why is this considered natural? A more intuitive assumption, in my view, would be to require that $\phi$ is a Lipschitz continuous function of $\tau$.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments. We address the questions and major concerns below:
**Concern 1: Line 285-287 Proof**
We apologize for the confusion. We will fix the typo in the revised draft. We also note that the final regret bound is correct because we add the additional $\epsilon_t$ in line 290.
**Concern 2: Reference Plot**
We thank the reviewer for the helpful advice. We updated Figure 1 to add $f(T) = \sqrt{T \log T}$ as a reference. We also take the log of the y-axis to improve readability. The figure is uploaded here:
<https://imgur.com/a/j05QwzS>
**Concern 3: Natural Metric**
It is correct that $\tau$ serves as a surrogate for prediction set size, as the prediction set size is monotonic in $\tau$. In practice, we find that this surrogate performs reasonably well, and we use a variant of it in our experiments. The reason we do not directly use prediction set size is explained in lines 148–150 (right column): specifically, because prediction set size is a discontinuous function, it does not satisfy Assumption 2.1. We have also conducted experiments on the convergence behavior of the prediction set size and would be happy to share the results.
Alternatively, a guarantee that existing conformal prediction algorithms provide is that the coverage rate is also not too much *higher* than desired, which shows that the algorithm is not overly conservative. Indeed, our regret guarantee can be applied to obtain such a bound. Specifically, consider the loss function $\phi(\tau) = 1-\min\\{G^*(\tau),G^*(\tau^*)\\}$ (the min is to ensure the step-wise regret is non-negative). On the event that $\tau_t\le\tau^*$ for all $t$ (which holds with high probability, see Lemma 4.2), then $\min\\{G^*(\tau_t),G^*(\tau^*)\\}=G^*(\tau_t)$, so the step-wise regret is
$$r_t := \phi(\tau_t) - \phi(\tau^*) = G^*(\tau^*) - G^*(\tau_t) = \mathbb{P}[\tau_t\le f(x_t,y_t^*)\le\tau^*].$$
In other words, the step-wise regret $r_t$ equals the overcoverage probability -- i.e., the probability that our algorithm covers $y_t^*$ when the "oracle" algorithm (which knows $\tau^*$) does not. Then, by Theorem 3.1, we have $r_t = \mathbb{P}[\tau_t\le f(x_t,y_t^*)\le\tau^*] = O(\sqrt{\log t/t})$, implying that the overcoverage probability converges to zero. We are happy to add a discussion of this result to our paper.
**Question 1: $\epsilon_t$ in Lemma 4.2**
It is correct that $\epsilon_t$ is defined in Equation (4). We will include this definition in the statement of the lemma in the revised draft to avoid any ambiguity.
**Question 2: Theorem 3.1 Stated Twice**
We apologize for the confusion. We re-stated the theorem in the proof section to improve its readability. We are happy to remove it.
**Question 3: $\delta$ in Lemma 4.1**
We define $\delta = 1/T^2$ in line 199-200 (right column). We will include this definition in the revised proof to avoid future confusion.
**Question 4: Assumption 2.1**
We meant natural in the sense that it naturally captures relevant structure in our problem, not necessarily that it is a natural assumption. Specifically, this assumption is a natural way to avoid the pathological cases discussed in lines 143–150. To summarize, consider a CDF such that $G^*(\tau_0) = G^*(\tau^*) = 1 - \alpha$ with $\tau_0 < \tau^*$. Recall that $\tau^* = \sup\\{\tau \in R: G^*(\tau) \le 1 - \alpha\\}$. Since we almost surely never observe any scores between $\tau_0$ and $\tau^*$, it becomes extremely difficult to identify $\tau^*$ from $[\tau_0,\tau^*]$ with a finite sample. Even a small error in estimating the empirical CDF---e.g., $G_t(\tau^*) = G^*(\tau^*) + \epsilon$---can result in significant regret. We apologize for the confusion, and will clarify our explanation of this assumption.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. Including a brief discussion of your reply to my Concern 3 in the main text would be helpful and appreciated.
Regarding Assumption 2.1, thank you for the clarification. That said, the assumption still feels somewhat abrupt and lacks sufficient context. I would recommend adding a discussion on whether similar assumptions have been used in prior work. If possible, it would also be valuable to outline what an ideal structural assumption on $\phi$ might look like, acknowledge the limitations of the current formulation, and perhaps pose this as an open question for future work.
I would like to retain my original score. | Summary: The paper introduces a novel algorithm for online conformal prediction in settings where only partial feedback is available, specifically semi-bandit feedback. Conformal prediction is a framework for uncertainty quantification that outputs prediction sets with guaranteed coverage probabilities. The key challenge addressed in this work is the lack of a large calibration dataset, which is typically required for conformal prediction. Instead, the authors consider an online setting where examples arrive sequentially, and the true label is only observed if it is contained in the prediction set.
The proposed algorithm, Semi-bandit Prediction Set (SPS), dynamically constructs prediction sets over time. It uses a high-probability upper bound on the cumulative distribution function (CDF) of the scoring function to ensure that the prediction sets maintain the desired coverage rate. The algorithm guarantees sublinear regret, providing strong theoretical guarantees and empirical performance. The algorithm is applicable to a wide range of real-world tasks, including document retrieval, image classification, and auction price-setting.
Claims And Evidence: The claims made in the submission are well-supported by both theoretical proofs and empirical results. The authors provide clear evidence for the effectiveness of their algorithm in maintaining the desired coverage rate, achieving sublinear regret, and ensuring zero undercoverage. The empirical evaluation is thorough, and the theoretical analysis is rigorous, making the claims convincing and credible.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem of online conformal prediction with semi-bandit feedback. The algorithm is designed to handle the challenges of this setting, and the evaluation criteria align with the goals of the application. The benchmark datasets and baselines are appropriate and provide a thorough evaluation of the algorithm’s performance.
Theoretical Claims: The inductive step in Lemma 4.2 could be more detailed. Specifically, it would be helpful to explicitly state how the inductive hypothesis is used which is not so clear to me.
Overall, the theoretical analysis is rigorous and supports the claims made in the paper.
Experimental Designs Or Analyses: The experimental design and analyses are sound and valid. The tasks, datasets, baselines, and metrics are well-chosen and relevant to the problem of online conformal prediction with semi-bandit feedback.
Supplementary Material: yes, the whole part.
Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the broader scientific literature in conformal prediction, online learning, and bandit feedback.
Essential References Not Discussed: no
Other Strengths And Weaknesses: Strengths:
Providing the first regret guarantee for online conformal prediction with semi-bandit feedback.
Weaknesses:
The writing should be improved and well-organized.
Other Comments Or Suggestions: See the Theoretical Claims above.
Questions For Authors: See the Theoretical Claims above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments. We address the questions and major concerns below:
**Concern 1: Lemma 4.2**
We apologize for the confusion. A more detailed proof is included below:
*Base case:* when $t = 1$, we set $\tau_1 = -\infty$. Thus, $\tau_1 \le \tau^*$.
*Inductive Hypothesis:* Assume $\tau_k \le \tau^*$ for some $k \ge 1$.
We prove $\tau_{k+1} \le \tau^*$ under our assumption $\sup_{\tau \in R}|G_t(\tau) - G_t^*(\tau)| \le \epsilon_t$ for all $t \in [T]$. Note that $\tau_k \le \tau^*$ by the inductive hypothesis, so we have $G_k^*(\tau) = G^*(\tau)$ for all $\tau \ge \tau^*$. Thus, we have
$$
|G_k(\tau^*) - G^*(\tau^*)| = |G_k(\tau^*) - G_k^*(\tau^*)| \le \sup_{\tau \in R}|G_k(\tau) - G_k^*(\tau)| \le \epsilon_k,
$$
where the last inequality follows from our assumption.
Thus, we have $\overline{G}_k(\tau^*) = G_k(\tau^*) + \epsilon_k \ge G^*(\tau^*) = 1 - \alpha$. Recall we define
$$
\tau_{1-\alpha, k} = \sup\\{\tau \in R \mid \overline{G}_k(\tau) \le 1 - \alpha\\};
$$
thus, we have $\tau_{1-\alpha, k} \le \tau^*$. Since $\tau_{k+1} = \max\\{\tau_{1-\alpha, k}, \tau_k\\}$ with $\tau_{1-\alpha, k} \le \tau^*$ and $\tau_k \le \tau^*$, we have $\tau_{k+1} \le \tau^*$.
We are happy to further elaborate on any portion of the proof. | Summary: The paper proposes a novel stochastic online conformal prediction algorithm designed for constructing prediction sets under semi-bandit feedback. It addresses the challenge of sequential decision-making where feedback is limited to partial information (e.g., only observing certain bids or labels). The algorithm ensures desired coverage rates while achieving sublinear regret and maintaining zero undercoverage count. The authors demonstrate the algorithm's effectiveness across three tasks: image classification (ImageNet), document retrieval (SQuAD), and second-price auctions. The key contributions include a new approach to conformal prediction that is robust to semi-bandit feedback and does not require hyperparameter tuning. Experimental results show that the proposed method outperforms existing strategies in terms of regret, coverage rate, and undercoverage.
Claims And Evidence: Yes, the claims made in the submission are supported by convincing evidence. The paper provides experimental results across multiple tasks (ImageNet, SQuAD, second-price auctions) demonstrating that the proposed algorithm outperforms existing methods (e.g., ACI, greedy, and DLR) in terms of cumulative regret, coverage rate, and undercoverage count. The results are backed by quantitative metrics and comparisons, including regret, coverage rate, and undercoverage count, which show the algorithm’s superior performance.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-suited for the problem. The use of benchmark datasets like ImageNet, SQuAD, and second-price auctions aligns with the tasks being evaluated (image classification, document retrieval, and auction reservation price prediction). The evaluation criteria, including cumulative regret, coverage rate, and undercoverage count, effectively measure the algorithm's performance in achieving reliable prediction sets under stochastic semi-bandit feedback.
Theoretical Claims: High-level the proofs presented, particularly the bounds on cumulative regret and coverage rate, appear mathematically sound based on the assumptions and lemmas provided, but want to acknowledge that I did not check everything in details for the correctness of the proofs.
Experimental Designs Or Analyses: based on the description, the experimental setup appears appropriate for evaluating the proposed method, with tasks like image classification, document retrieval, and second-price auction reservation price prediction used to assess the algorithm's performance across different domains. The metrics (cumulative regret, coverage rate, and undercoverage count) are well-suited for the problem.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper's key contributions—developing a stochastic online conformal prediction algorithm with semi-bandit feedback—build upon and extend prior work in conformal prediction, online learning, and semi-bandit feedback. Specifically, it relates to prior methods like Adaptive Conformal Inference (ACI), Decaying Learning Rate (DLR), and greedy strategies, highlighting the limitations of these approaches (e.g., failure to maintain coverage rate and suboptimal regret). The paper builds on existing work in these areas by proposing a method that guarantees coverage rate, sublinear regret, and zero undercoverage, addressing issues like biased updates and slow convergence in existing algorithms. This contributes to improving prediction set reliability and uncertainty quantification in various machine learning tasks.
Essential References Not Discussed: The paper sufficiently cites foundational works such as ACI, DLR, and greedy algorithms, but it could benefit with further discussion on more recent advancements in conformal prediction and semi-bandit feedback. Specifically, recent work on the adaptation of conformal prediction methods to deep learning models (such as those applied to large-scale image or text datasets) could further strengthen the background context.
Other Strengths And Weaknesses: Strengths:
- The paper presents an original approach by combining conformal prediction with semi-bandit feedback, addressing a gap in the existing literature.
- It demonstrates a practical application to real-world tasks (image classification, document retrieval, second-price auctions) which adds significant relevance.
- The experimental results are thorough, showing that the proposed method outperforms several baselines, particularly in maintaining coverage and achieving sublinear regret.
Weaknesses:
- While the proposed method is effective, the paper could benefit from additional comparative analysis against a broader set of methods or more challenging scenarios.
Other Comments Or Suggestions: - The paper would benefit from a clearer explanation of the key assumptions and how they relate to the experimental setup.
- It would be useful to provide a more detailed discussion on the limitations of the proposed method, especially in edge cases or different types of feedback.
Questions For Authors: How does your proposed algorithm handle extreme cases where the semi-bandit feedback is highly noisy or sparse? A more detailed explanation of its robustness in such scenarios would clarify its practical applicability.
Can you provide a more in-depth explanation of the trade-off between computational efficiency and the desired coverage rate in your method, especially when dealing with large-scale datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments. We address the questions and major concerns below:
**Concern 1: Additional Comparative Analysis**
We appreciate the reviewer’s suggestion. In both our discussions and experiments, we have considered a comprehensive set of popular online conformal prediction algorithms from recent literature that operate in similar contexts, and we demonstrate that our algorithm outperforms them in the semi-bandit setting. We believe these scenarios already demonstrate challenging and practical applications of our techniques, but we are happy to consider additional scenarios the reviewer might have in mind.
**Concern 2: Key Assumptions**
Assumption 2.1 rules out cases where the reward varies substantially in regions where the CDF $G^*$ is very flat. In such regions, the number of observed samples is typically small. Therefore, if $\tau^*$ lies in a flat region of $G^*$ and the reward fluctuates significantly, the accrued regret can be large. This type of regularity condition is common in regret analyses within the bandit literature, where some assumptions on the reward mechanism are required to ensure meaningful guarantees (e.g., see [1]).
The bounded reward assumption in Assumption 2.2 typically holds in practice. First, since the CDF $G^*$ is a bounded function, it is natural to assume that the reward $\phi(\tau) = \psi(G^*(\tau))$ is also bounded. Alternatively, if we consider $\phi$ as representing the prediction set size, the assumption still holds because prediction set sizes are inherently bounded. The bounded reward condition is also standard in regret analyses (e.g., see [2]).
Next, we show that Assumption 2.3 can be removed with some additional technical work. We begin by redefining $\tau^*$ as $\tau^* = \sup\\{\tau: G^*(\tau) \le 1 - \alpha\\}$, and let $1 - \alpha' = G^*(\tau^*)$. The proof then proceeds almost identically as before, with $1 - \alpha'$ substituted for $1 - \alpha$. The only complication is that $\tau_t$ is still computed using $\alpha$. However, since the DKW inequality applies to arbitrary CDFs, we can still show that $\tau_t \le \tau^*$ with probability at least $1 - 1/T^2$. As a result, we obtain that $G(\tau_t) \to 1 - \alpha'$, i.e., our algorithm maintains and converges to an $\alpha'$-coverage rate. With these modifications, our regret guarantee continues to hold.
In the experiments section, we choose a loss function (lines 374–377, right column) that satisfies Assumptions 2.1 and 2.2. Since relaxing Assumption 2.3 does not impact our regret or coverage guarantees, we use the observed empirical distribution of scores as the ground-truth distribution.
[1] Kleinberg, Robert, Aleksandrs Slivkins, and Eli Upfal. "Multi-armed bandits in metric spaces." In Proceedings of the fortieth annual ACM symposium on Theory of computing, pp. 681-690. 2008.
[2] Auer, P. "Finite-time Analysis of the Multiarmed Bandit Problem." (2002).
**Concern 3: Limitations**
One limitation of our algorithm is that it currently operates under the i.i.d. assumption, whereas several existing online conformal prediction approaches are designed to work in the adversarial setting; however, these existing techniques cannot handle semi-bandit feedback. With respect to different feedback mechanisms, our algorithm also functions in the full-information feedback setting without modification; however, in this case, there may be more effective strategies than our algorithm. Finally, one might also consider bandit (instead of semi-bandit) feedback, but it is not clear what that would look like in the conformal prediction setting. We are happy to add a discussion to our paper.
**Question 1: Noisy and Sparse Labels**
In general, conformal prediction naturally handles label noise, since the ground truth label $y^*$ is assumed to be a random variable even conditioned on $x$. In terms of "extreme cases", our coverage and regret guarantees still hold but the prediction sets may be large. Regarding sparsity, we note our conformal prediction setting is special, and our semi-bandit feedback cannot be sparse -- we only fail to observe the true label when it is miscovered, which happens with probability at least $\alpha$. We are happy to add a discussion to our paper.
**Question 2: Computation Efficiency**
As our algorithm only needs access to the empirical score distributions, the computational efficiency does not depend on the choice of the coverage rate $\alpha$. For the same reason, the scaling of our algorithm does not depend on the data dimensionality except through how long it takes to evaluate the predictive model. Since it is an online algorithm, its computation time scales linearly in the size of the dataset (with constant compute per iteration) and uses constant memory. These scalability properties are similar to existing conformal prediction algorithms. We are happy to add a discussion to our paper. | Summary: This paper introduces an online conformal prediction method with semi-bandit feedback (i.e. we only observe the true label if it is contained in the prediction set). Authors show that, under the iid setting, their method controls the expected cumulative regret and that $\tau_t$ (the threholding value at time $t$) converges to $\tau^*$ the optimal one at $t$ goes to infinity. More importantly, they show that $\tau_t \leq \tau^*$ for all $t$ with high probability. They finally We evaluate the algorithm on several task to empirically show that they achieve performance compared to several baselines.
Claims And Evidence: Yes, the claims are clear. However, in my opinion, the iid assumption is an important one (especially in online CP) and should appear in the theorem.
Methods And Evaluation Criteria: Yes, the evaluation makes sense.
Theoretical Claims: Yes, I partially check the proofs and it seems that this is ok.
Experimental Designs Or Analyses: Yes, the experimental design seems correct to me.
Supplementary Material: .
Relation To Broader Scientific Literature: The article assumes an iid framework, which is a bit different from the previous article on online CP. Furthermore, my main concern on this point is that I don't understand how this article compares to [1]. My first impression is that [1] does the same thing as this article but better (I think the semi-bandit feedback can be easily handled by [1]'s technique). But maybe I am wrong and I would like to hear the authors' opinion on this point.
[1] “On-Line Confidence Machines Are Well-Calibrated” by Vladimir Vovk.
Essential References Not Discussed: No, everything is ok (except maybe [1] ; see previous point).
Other Strengths And Weaknesses: Strengths:
1\ The subject is of huge interest for the CP community.
2\ The paper is well written and quite easy to follow.
Weaknesses:
1\ The paper only consider the iid setting
2\ The $f$ function is not time-dependent. In a real-life scenario, we probably want to use the new points to improve our $f$ function.
3\ This point could also be a strenght (depending on your point of view), but the method is in the end an application of DKW inequality (reason why I think that [1] is better -- but I could be wrong).
4\ The figures are not very easy to read. Maybe a logarithmic scale or something like that would be better.
5\ The code is not provided.
Other Comments Or Suggestions: Minor:
1\ Line 75: "Angelopoulos et al. (2024b) points out that ACI still works if we take the “quantile function” to be the identity function .." I do not understand this remark. Do you mean that the update in ACI can be made on $q_t$ and not on $\alpha_t$?
2\ Line 84: "we show that a standard variation the scoring function" problem in the sentence.
3\ Line 104: "convergence of $\tau_t$ to $\tau^*$" They are never defined before this paragraph. Therefore, on first reading, we cannot understand this statement. Overall, the text sometimes refers to mathematical objects that are not yet defined.
4\ Line 433: "hyperparamters"
Questions For Authors: 1\ My first concern is about the paper [1]. It seems to me that because you assume that the data are iid, then the method of [1] is better bu maybe I am wrong. Can you elaborate on this?
2\ Throughout the paper, $f$ is fixed at the beginning. Do you think it is possible to “relearn” the score function as new points are observed?
3\ Line 213 "As a consequence, our algorithm converges to the true $\tau^*$". Why? can you elaborate on this? Furthermore, do you think that it is possible to obtain a rate of convergence?
4\ In general, papers in online CP control the FCP. Do you think it could be possible to have such result here? (Perhaps it is trivial due to the fact that $\tau_t \leq \tau^*$).
5\ Can you give some important examples on $\phi$?
6\ In the experiment, it seems that the method is relatively conservative with coverage around 0.95 even with $T=10000$ points. Can you elaborate on this?
7\ Minor question: In the online literature, they also control dynamic regret. Do you think it is possible to do that here?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments; we address major concerns below.
**Concern 1: Comparison to [1]**
By our understanding, the algorithm in [1] cannot be straightforwardly modified to handle semi-bandit feedback. Specifically, they use a strangeness measure $A_t:(z_1,...,z_t)\mapsto(\alpha_1,...,\alpha_t)$ (Eq (8) in [1]) to construct prediction sets, where $z_i=(x_i,y_i)$ is a calibration example; e.g.:
$$\alpha_i=\frac{\min_{j\neq i:y_j=y_i}d(x_i,x_j)}{\min_{j\neq i:y_j\neq y_i}d(x_i, x_j)}.$$
Clearly, computing $\alpha_i$ requires the true label $y_i$. Next, given a new input $x_t$, they return the prediction set of all labels $y$ such that $\alpha$ satisfies
$$\frac{\\#\\{i=1,...,t:\alpha_i\ge\alpha\\}}{t} \ge\delta,$$
where
$$(\alpha_1,...,\alpha_{t-1},\alpha)=A_t((x_1,y_1),...,(x_{t-1},y_{t-1}),(x_t,y))$$
is the strangeness measure assuming the true label for $x_t$ is $y$ (Eqs (9) to (11) in [1]). Thus, to construct $\alpha$, they require $\alpha_i$ for all $i$, so all true labels $y_i$ are required. This requirement persists in their modifications (Eqs (16), (20)–(21) in [1]).
Our main contribution is precisely to handle semi-bandit feedback. We will add a discussion.
**Concern 2: i.i.d.**
We believe the i.i.d. assumption is reasonable in our active learning setting. We agree that extending our work to the adversarial setting is an important direction, but it is beyond the scope of our paper.
**Concern 3: Re-learn $f$**
We believe there are practical settings where training is decoupled from calibration, especially when training is costly (e.g., LLMs). One of our applications involves using a pretrained document retriever in a new domain where fine-tuning may be infeasible. However, we agree that extending to settings where the model $f$ is updated is important and will add a discussion.
**Concern 4: Simple use of DKW**
As discussed above, we do not believe [1] can easily be adapted to semi-bandit feedback. Also, we emphasize that a naïve application of DKW is insufficient; we have introduced additional novel techniques to handle semi-bandit feedback.
**Concern 5: Image Readability**
We have updated Figure 1 to use a log scale:
<https://imgur.com/a/j05QwzS>
**Q3: Line 213 $\tau_t$ convergence**
Our convergence guarantee is based on the fact that the empirical CDF converges to the true CDF, i.e., $\overline{G}_t$ converges to $G^*$ for $\tau\in[\tau^*,+\infty)$. For intuition, consider full feedback. Let $G'_t$ denote the empirical CDF. Similar to Eq (4), define
$$\overline{G}'_t(\tau)=G'_t(\tau)+\epsilon_t$$
where $\epsilon_t=\sqrt{\log(2/\delta)/2t}$, and choose
$$\tau_t=\sup\\{\tau\in\mathbb{R}\mid\overline{G}'_{t-1}(\tau)\le1-\alpha\\}.$$
By DKW, $\overline{G}'_t(\tau^*)$ converges to $G^*(\tau^*)$, so $\tau_t$ converges to $\tau^*$. The convergence rate $O(t^{-1/2})$ comes from DKW.
With semi-bandit feedback, convergence is not guaranteed a priori. However, Lemmas 4.1 and 4.2 show our CDF upper bound $\overline{G}_t(\tau^*)$ still converges to $G^*(\tau^*)$. Thus, $\tau_t$ converges to $\tau^*$, with the same convergence rate.
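For concreteness, the full-feedback construction above can be sketched in a few lines. This is an illustrative implementation under simplifying assumptions (synthetic uniform scores; the sup is conservatively approximated by the largest qualifying order statistic), not our semi-bandit algorithm:

```python
import math
import random

def dkw_threshold(scores, alpha, delta):
    # tau = sup{tau : G'_t(tau) + eps_t <= 1 - alpha}, where G'_t is the
    # empirical CDF and eps_t = sqrt(log(2/delta) / (2t)) is the DKW width.
    # We conservatively return the largest order statistic satisfying the
    # constraint (a slight under-approximation of the sup).
    t = len(scores)
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * t))
    tau = -math.inf
    for i, x in enumerate(sorted(scores)):
        if (i + 1) / t + eps <= 1 - alpha:
            tau = x
    return tau

random.seed(0)
scores = [random.random() for _ in range(5000)]  # true CDF: G*(tau) = tau
tau = dkw_threshold(scores, alpha=0.1, delta=1e-4)
# tau stays below the true (1-alpha)-quantile 0.9 (no undercoverage) while
# approaching it as eps_t shrinks at an O(sqrt(log(1/delta)/t)) rate.
assert 0.8 < tau < 0.9
```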
**Q4: FCP**
Apologies, we are not sure what "FCP" means. The false negative rate (FNR) is controlled by the coverage guarantee (assuming a false negative is a miscovered example). We are not aware of an analog of false positive rate (FPR) in conformal prediction. One guarantee of existing conformal prediction algorithms is the coverage rate is also not too much higher than desired, i.e., they are not overly conservative. Our regret guarantee provides such a bound. Consider the loss $\phi(\tau)=1-\min\\{G^*(\tau),G^*(\tau^*)\\}$ (the min ensures the step-wise regret is non-negative). On the event $\tau_t\le\tau^*$ for all $t$ (which holds with high probability by Lemma 4.2), the step-wise regret is
$$r_t=\phi(\tau_t)-\phi(\tau^*)=G^*(\tau^*)-G^*(\tau_t)=\mathbb{P}[\tau_t\le f(x_t,y_t^*)\le\tau^*],$$
i.e., $r_t$ is the overcoverage probability that we cover $y_t^*$ but the oracle does not. By Theorem 3.1, $r_t=O(\sqrt{\log t/t})$, so the overcoverage probability converges to zero. We will add a discussion.
**Q5: Example on $\phi$**
Note that Algorithm 1 does not depend on $\phi$; it is introduced solely to evaluate the algorithm's performance. The example in our experiments is a practical choice; the overcoverage rate above is another.
**Q6: Conservativeness**
This is a consequence of our PAC guarantee and semi-bandit feedback. Our algorithm ensures $\alpha$-coverage at every $t$, which implies $\tau_t\le\tau^*$ for all $t$. However, as shown in our regret analysis, $\tau_t$ is guaranteed to converge to $\tau^*$, with the convergence rate given by DKW.
**Q7: Dynamic Regret**
Since we consider the i.i.d. setting, the optimal policy is static (i.e., $\tau_t=\tau^*$), so our regret equals dynamic regret. We believe extensions to the adversarial setting (where static and dynamic regret are distinct) are beyond the scope of our paper.
Improving Transformer World Models for Data-Efficient RL | Accept (poster) | Summary: This paper presents a technically sound and empirically robust contribution to MBRL, with clear innovations in tokenization and transformer training. While the Craftax-centric evaluation limits immediate generalizability, the methodological advancements (NNT, BTF) are likely to inspire follow-up work. The paper is suitable for acceptance provided the authors address the concerns raised below.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: The proposed methods would potentially be useful for real-world applications (e.g., robotics, automated exploration).
Essential References Not Discussed: NA
Other Strengths And Weaknesses: + This paper presents a technically sound and empirically robust contribution to MBRL, with clear innovations in tokenization and transformer training. While the Craftax-centric evaluation limits immediate generalizability, the methodological advancements (NNT, BTF) are likely to inspire follow-up work. The paper is suitable for acceptance provided the authors address the concerns below.
+The incremental "ladder of improvements" (Table 1) and detailed ablation analyses (Table 2, Figure 5) clearly demonstrate the contribution of each component (Dyna with warmup, NNT, BTF). The sensitivity analysis of patch size and warmup duration provides actionable insights.
+The nearest-neighbor tokenizer (NNT) and block teacher forcing (BTF) are novel and well-motivated. NNT’s static codebook addresses non-stationarity in VQ-VAE training, while BTF’s parallel token prediction mitigates autoregressive drift. These innovations are backed by quantitative metrics (symbol accuracy, reconstruction error) and qualitative rollout comparisons (Figure 6).
+The MBRL agent’s training time (759 minutes on 8 H100 GPUs) is competitive compared to prior work (e.g., IRIS: 18330 minutes). The MFRL baseline’s efficiency (15 minutes) further underscores the practicality of the approach.
-The evaluation is restricted to Craftax-classic and Craftax Full. While the environment’s complexity is a strength, the paper does not validate whether the proposed techniques (e.g., patch-based NNT) generalize to other domains (e.g., 3D environments, non-grid-based tasks). A discussion of applicability to broader settings (e.g., Minecraft, Atari, robotics) would strengthen the contribution, since the required codebook size for NNT would increase significantly in first-person-view settings, which may affect the stability of training the TWM.
-While the empirical results are compelling, the paper lacks theoretical analysis of why BTF or NNT improve performance. For instance, how does BTF’s block causal attention reduce compounding errors compared to autoregressive sampling? The paper states that BTF improves performance by yielding a more accurate TWM compared to AR methods. However, the observed gains may instead stem from more efficient representations learned through future-state reasoning, rather than solely from improved accuracy. This distinction is not explicitly validated, as the ablation study does not include a direct comparison of dynamics losses. A deeper connection to existing theory (e.g., bias-variance trade-offs in world models) would add rigor.
-The NNT assumes aligned patches (e.g., 7×7 grids), which aligns with Craftax’s design but may not generalize to environments with less structured observations (e.g., raw pixel inputs in Atari). In such cases, NNT may allocate excessive computational resources to decoding background information, potentially overlooking small, reward-relevant objects. The paper briefly acknowledges this limitation but does not explore workarounds (e.g., adaptive patching).
- While the paper compares to IRIS and DreamerV3, it omits discussion of concurrent MBRL advances (e.g., Diamond's diffusion models, TD-MPC2’s continuous control). A broader literature review would better contextualize the work.
Other Comments Or Suggestions: Key implementation details (e.g., GRU architecture, codebook initialization for NNT) are briefly described but lack specificity. Public code or an appendix with full hyperparameters would aid reproducibility.
The human expert baseline (65.0% reward) is derived from 100 episodes by 5 players, but the paper does not clarify how this was standardized (e.g., episode length, interaction constraints). More details would strengthen the comparison.
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of our work. We address your questions and comments below:
**Weaknesses **
(1) As detailed in our rebuttal to Reviewer fn71, we successfully trained an MBRL agent on the MinAtar benchmark, reusing our core MBRL components with little hyperparameter tuning. This agent significantly outperformed a tuned MFRL agent in these environments, highlighting the potential transferability of our approach to other grid-world settings.
(2) We appreciate your suggestion for a theoretical comparison between the autoregressive and block teacher forcing (BTF) settings. We agree that this would be a valuable avenue for future investigation.
(3) To quantify the impact of BTF in dynamics learning, we measured the average cross-entropy loss of the observation tokens over the last 500,000 training steps. Our TWM without BTF achieves an average CE of $0.478 \pm 0.04$, while our best TWM with BTF reaches $0.432 \pm 0.004$. This difference in learning dynamics suggests that BTF facilitates learning, likely by enabling the reuse of intermediate computations for next token predictions.
(4) We agree that our proposed nearest neighbor tokenizer (NNT) is likely best-suited for grid-world environments. For environments with raw pixel inputs, such as Atari and Procgen, we believe that alternative tokenization methods like VQ-VAE and its variants will be necessary. We are actively exploring this direction. Our other two methods, Dyna with warmup and Block Teacher Forcing, should remain applicable in these settings.
(5) Please note that we discuss TD-MPC2 and Diamond in our related work Section 2, including in our footnote 2.
**Other comments or suggestions**
(1) We would like to direct your attention to Appendix A, specifically Tables 4, 5, and 6, where we have tried to present all the hyperparameters used in our MBRL pipeline.
(2) The human expert results utilized in this work have been extracted from the original Crafter paper. As detailed in Section 4.4 of [1], this dataset comprises 100 gameplay episodes recorded from five human experts who were given game instructions and several hours of practice prior to recording.
**References** [1] Hafner, D. Benchmarking the spectrum of agent capabilities. arXiv preprint arXiv:2109.06780, 2021.
The method includes three improvements spanning both the policy and the transformer world model (TWM): “Dyna with warmup”, “nearest neighbor tokenizer”, and “block teacher forcing”.
Claims And Evidence: Yes, the claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed method makes sense.
Theoretical Claims: The paper does not present any formal theoretical proofs.
Experimental Designs Or Analyses: I have checked the experimental designs in Section 4 and Appendix B, C & D.
Supplementary Material: I have checked the algorithmic details in Appendix A, and the experiments in Appendix B, C & D.
Relation To Broader Scientific Literature: The key contribution of this paper is the three improvements for model-based reinforcement learning with a transformer world model:
1. “Dyna with warmup” follows the classic Dyna [1] setting.
2. The “nearest neighbor tokenizer” method is an improvement on the VQ-VAE in IRIS [2].
3. The “block teacher forcing” method modifies the attention mask of the GPT structure so that it can predict tokens in parallel, in a similar way as in REM [3].
[1] Richard S. Sutton, et al. “Dyna, an integrated architecture for learning, planning, and reacting.” ACM Sigart Bulletin (1991).
[2] Vincent Micheli, et al. “Transformers are sample-efficient world models.” ICLR (2023).
[3] Lior Cohen, et al. "Improving Token-Based World Models with Parallel Observation Prediction." ICML (2024).
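For intuition, the nearest-neighbor tokenization idea in item 2 above could look roughly like the following toy sketch; the L2 metric, the threshold, and the codebook-growth rule are our illustrative assumptions rather than details taken from the paper:

```python
import numpy as np

class NNTokenizer:
    # Toy sketch of a nearest-neighbor patch tokenizer: the codebook is
    # grown on the fly the first time a (near-)novel patch appears, and
    # previously seen patches map to the index of their nearest stored
    # code. Unlike a VQ-VAE codebook, codes are never updated once added,
    # so the token targets stay stationary during world-model training.
    def __init__(self, threshold=1e-6):
        self.codes = []
        self.threshold = threshold

    def encode(self, patch):
        p = np.asarray(patch, dtype=float).ravel()
        if self.codes:
            dists = np.linalg.norm(np.stack(self.codes) - p, axis=1)
            i = int(dists.argmin())
            if dists[i] <= self.threshold:
                return i
        self.codes.append(p)
        return len(self.codes) - 1
```

In the worst case such a codebook grows without bound; a cap on its size would bound memory, and in grid-worlds the number of distinct patches is naturally small.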
Essential References Not Discussed: One of the key contributions is the parallel prediction of observation tokens, but a similar parallel prediction method was already proposed, namely Algorithms 1 & 2 of REM [1], published at ICML 2024.
[1] Lior Cohen, et al. "Improving Token-Based World Models with Parallel Observation Prediction." ICML (2024).
Other Strengths And Weaknesses: * Strengths
1. This paper is well-written with clear motivation.
2. The three changes are sensible and effective, achieving significant performance gains in a Craftax-classic environment.
3. The experiments are solid, especially the ablation of the components.
* Weaknesses
1. The method contains many hyperparameters, and it is difficult to determine which ones are important, making the method hard to transfer to other environments.
Other Comments Or Suggestions: NA
Questions For Authors: 1. According to Figure 1, using only PPO outperforms other baseline MBRL methods, most of which are based on REINFORCE. What would happen if you also build upon REINFORCE?
2. I am a bit confused about the NNT implementation, it looks like in the worst case NNT leads to unbounded memory overhead. How do you avoid this problem?
If you respond to these questions and address these concerns, I'll be willing to raise the score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Essential References Not Discussed:**
We thank you for pointing out the REM reference: we will include it in our revised paper. You are right that both block teacher forcing (BTF) and REM predict all the next frame tokens jointly. However, while REM uses a retentive network, BTF is applicable to a broader range of transformer architectures. BTF achieves joint prediction through a modification of the causal mask and the supervision signal, as illustrated in our Figure 3 and detailed in Appendix A.2.2.
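To illustrate the masking idea, here is a simplified sketch (our rendering for illustration only; the exact BTF mask and supervision signal are as shown in Figure 3 and Appendix A.2.2):

```python
import numpy as np

def block_causal_mask(num_blocks, block_size):
    # True = attention allowed. Tokens are grouped into frame-sized
    # blocks; each token may attend to every token in its own block and
    # in all earlier blocks, but never to future blocks. Under block
    # teacher forcing, token i of frame t is then supervised to predict
    # token i of frame t + 1, so all next-frame tokens are predicted in
    # parallel instead of one at a time.
    n = num_blocks * block_size
    block_id = np.arange(n) // block_size
    return block_id[None, :] <= block_id[:, None]
```

Because the rule only touches the attention mask (not the architecture), it applies to any transformer, whereas REM's parallel prediction is tied to a retentive network.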
**Weaknesses:**
We acknowledge that our model-based RL approach, like many others in this domain, involves a significant number of hyperparameters due to the interaction of its various components: the actor-critic policy, the tokenizer, and the world model.
To provide clarity, we have dedicated separate appendices to detail each of these components: Appendix A.1 for the actor-critic policy, Appendix A.2.1 for the tokenizer, and Appendix A.2.2 for the world model. Table 6, Appendix A.3.4 summarizes the main MBRL parameters which glue these parameters together.
Our supplementary experiments on the MinAtar environments, detailed in our rebuttal to Reviewer fn71, indicate that the majority of our proposed pipeline transfers to these other grid-world environments.
**Questions:**
(1) We opted to primarily utilize PPO due to its well-established advantages in terms of stability and performance. PPO's clipped surrogate objective function mitigates the issues of large policy updates that can destabilize learning, a common problem with vanilla REINFORCE. Furthermore, PPO generally outperforms REINFORCE in complex environments.
Given these advantages, we believe PPO provides a robust foundation for achieving high levels of performance in Craftax-classic. Consequently, we anticipate a decrease in performance if we were to substitute REINFORCE.
(2) You are right that in the worst case, our nearest neighbor tokenizer (NNT) can lead to memory overhead. However, we found that, on average, on Craftax-classic, only 2,304 codes (out of 4,096) are being used by NNT. | Summary: The authors propose a number of improvements to dreamer-style MBRL to achieve SOTA performance on the Craftax benchmark.
Claims And Evidence: The authors mention three main contributions:
- They show that training on both real and imagined data is better than solely on imagined data
- They embed image patches using nearest neighbors, while prior work embeds the entire image using a VQ-VAE
- They propose to use a block-teacher forcing objective instead of a standard log likelihood objective
They provide ample evidence to back their major claims.
The authors also make smaller claims I have a bit of trouble with.
> However, the near-deterministic nature of Atari games allows agents to memorize action sequences without demonstrating true generalization (Machado et al., 2018)
Virtually all modern work on Atari uses the `frameskip` variants where this is not true.
> For the RNN, we find it crucial to ensure the hidden state is low-dimensional, so that the memory is forced to focus on the relevant bits of the past that cannot be extracted from the current image
Is there any evidence for this claim? Just because a smaller hidden size works better does not imply this specific causation.
Methods And Evaluation Criteria: Craftax is an interesting, well-known, and difficult benchmark.
Theoretical Claims: They do not make theoretical claims
Experimental Designs Or Analyses: The authors meticulously ablate each and every component they introduce. I am quite happy with the depth of their experiments.
Supplementary Material: I did not read the supplementary material.
Relation To Broader Scientific Literature: The authors provide three improvements to dreamer-style MBRL. I believe other researchers focusing on such methods can integrate the authors' methods into their own models.
Essential References Not Discussed: I think they covered most important references.
Other Strengths And Weaknesses: The authors start from a simple existing model, and slowly build up tricks to reach a new SOTA. I enjoyed reading the paper, and I like that the authors ablate every single proposed change. Reaching SOTA on Craftax is a notable achievement, as the task is open-ended and difficult.
One major complaint I have is that the image codebook does not seem like a scalable approach for more complex problems. Yes, it works for small pixel observation spaces, but I imagine it would fail in more realistic tasks.
Other Comments Or Suggestions: > we fo-
cus on the Crafter domain (Hafner, 2021)
incorrect citation
Questions For Authors: Not using imagined trajectories for some $k$ timesteps seems like a clunky solution to the problem. Did you consider reweighting imagined samples by some variable that anneals from 0.0 to 1.0 as training goes on? The drop in Figure 4 suggests a "hard" warmup could hurt training.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are pleased that you enjoyed reading the paper and that you think that reaching SOTA on Craftax is a notable achievement, on a well-known and difficult benchmark.
**Claims and Evidence:**
(1) You are correct in that Atari with sticky actions (frameskip) makes it stochastic. We will remove our claim of Atari being a deterministic environment.
(2) We acknowledge that we do not demonstrate a causal relation between the low dimension of the hidden state, and the fact that the policy captures control relevant information. However, we varied the ratio of the RNN state dimension to the CNN encoder dimension, and found that a lower ratio yielded better performance, a result that contrasts with prior work. We will clarify this point in our revised paper.
**Weaknesses:**
We anticipate our proposed nearest neighbor tokenizer (NNT) to only work in grid-world environments. In environments with raw pixel inputs (Atari, Procgen) we believe that alternative tokenization methods, such as VQ-VAE and variants, are likely necessary. We are currently exploring this direction for future publication.
**Questions:**
Regarding annealing, we have conducted some follow-up experiments where we progressively increased the number of policy updates on imaginary rollouts ($N_{\text{AC}}^{\text{iters}}$ in Step 4 of Algorithm 1) from 0 to 300. This annealing technique achieves a reward of $65.7\%$ ($\pm 1.11$), while removing the drop in performance observed when we start training in imagination at $200$k steps. We will include these results in the revised paper. | Summary: The authors propose a new model-based RL method that achieves SOTA on Crafter. The superiority of their method stems from a variety of novel insights:
- adding a memory with low-dimensional hidden states and passing both the image embedding and the memory output to subsequent networks
- training on a mix of real and imaginary trajectories
- encoding image patches via nearest neighbor
- use of block teacher forcing to train the model
Claims And Evidence: They perform an evaluation by incrementally adding their improvements on top of the baseline. Their best model beats the baseline by more than 35%, which is very significant.
Methods And Evaluation Criteria: They evaluate their algorithm on Crafter, which is a good benchmark.
Theoretical Claims: N/A
Experimental Designs Or Analyses: They use 10 seeds which is standard in RL.
Supplementary Material: I have reviewed all parts.
Relation To Broader Scientific Literature: The authors build on the recent line of research on transformer world models and equip them with classical insights such as Dyna. Furthermore they propose a simpler alternative to VQ-VAE.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths: The paper explains precisely the different design choices the authors made, and they perform an extensive ablation study.
Weakness: Even if Crafter is a good environment, I would like to see one or two more environments to assess the generality of the method. I am happy to raise my score once I see results for another environment.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your favorable comments regarding our work.
To further validate the robustness of our approach, we have conducted additional experiments on another grid-world environment MinAtar (https://github.com/kenjyoung/MinAtar), a set of four simplified Atari 2600 games. MinAtar contains symbolic binary observations of size 10x10xK, where K is the number of objects in each game, and binary rewards.
We first tuned our model-free RL agent on these environments, keeping the same architecture as described in our paper, with minor adjustments to the PPO hyperparameters. Second, we developed our model-based RL agent, building upon our previously proposed techniques: Dyna warmup, nearest neighbor tokenizer, and block teacher forcing. We retained the majority of the MBRL hyperparameters from Craftax-classic, with the following key modifications: (a) we increased the number of WM training steps $N_{\text{TWM}}^{\text{iters}}$ from 500 to 2k, (b) we increased the number of policy updates in imagination $N_{\text{AC}}^{\text{iters}}$ from 150 to 2k, (c) we used patches of size 2x2xK, and (d) we added a weight of 10 to the cross-entropy losses of the reward and of the done states.
The table below compares the performance of our agents, averaged over 10 seeds after 1 million environment steps.
| Game | MFRL | MBRL |
|--------------|-----------------|-----------------|
| Asterix | $10.28 \pm 1.38$ | $47.24 \pm 10.12$ |
| Breakout | $76.36 \pm 1.82$ | $82.8 \pm 8.9$ |
| Freeway | $61.33 \pm 3.67$ | $70.73 \pm 0.33$ |
| SpaceInvader | $135.88 \pm 2.59$ | $180.7 \pm 3.34$ |
These results appear quite competitive compared to existing approaches. Notably, our MBRL agent, after only 1M environment steps, seems to outperform the Artificial Dopamine method reported in Guan et al. [1] after 5M steps (as illustrated in their Figure 4).
We are finalizing these experiments and plan to include them in the revised version of our paper.
**Reference:**
[1] Guan, Jonas, et al. "Temporal-Difference Learning Using Distributed Error Signals." Advances in Neural Information Processing Systems 37 (2024): 108710-108734. | null | null | null | null | null | null |
Deliberation in Latent Space via Differentiable Cache Augmentation | Accept (poster) | Summary: This work proposes a novel framework that augments an existing pretrained LLM with a differentiable cache module which can be finetuned on a dataset to improve performance of the model when it is combined with the aforementioned cache extender. The authors claim that the novel design of the module and the look-ahead training objective allow the final model to perform reasoning in the latent space introduced by the cache processor. The cache processor operates by taking in the kv cache of a given prompt input (x) and producing a sequence of latent embeddings that are appended to the existing representations computed by the frozen LLM.
Experiments and analysis done with proprietary data and public benchmarks show that the combined model improves performance compared to the frozen LLM itself, as well as a few other methods known from previous work.
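The mechanism described in this summary could be sketched roughly as follows (a toy, single-layer view; field names and shapes are illustrative assumptions, since the real model keeps per-layer, per-head caches):

```python
import numpy as np

def augment_cache(kv_cache, latents):
    # Toy single-layer view: the frozen LLM's kv cache for the prompt is
    # extended with the coprocessor's latent embeddings, so subsequent
    # decoding attends to prompt positions plus latent positions. The
    # base model's weights are never touched; only the coprocessor that
    # produced `latents` is trained.
    return {
        "k": np.concatenate([kv_cache["k"], latents["k"]], axis=0),
        "v": np.concatenate([kv_cache["v"], latents["v"]], axis=0),
    }
```

Because the augmentation is just an append to the cache, it can in principle be computed offline or asynchronously with respect to the base model's decoding loop.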
Claims And Evidence: * The idea of adding more trainable parameters to allow the model to gain extra performance improvements has clear reasoning behind it. For instance, deep seek v3 showed how multi token prediction with little parameter overhead gives significant improvements to the base model. Here we see how extra latent representation conditioned on the input is useful for predicting the output sequence.
* A big concern is the lack of clarity about the "proprietary training and validation data". This weakens all experimental results, as it is unclear how that data is aligned with the public benchmarks used in the experiments. Given that there exists lots of open-source pretraining data (such as dclm), I don't see any benefit of such a data setup in a research paper aimed at being shared publicly at the conference.
* A big concern is the lack of baselines showing how a frozen model finetuned on exactly the same data would perform on the benchmarks. Given that the cache processor model shares the architecture (more on this in the next point) with the main LLM, such a baseline is required.
* Experiment descriptions do not mention many details about the parameter capacity of the cache processor w.r.t. the main model, as well as competing methods such as LoRA. This is crucial to share, as it is one of the main drivers behind the model improvements: adding the cache processor adds more parameters, so another baseline would be scaling up the frozen model with more params to match the cache processor. While the authors claim efficiency benefits from the async cache processor abilities, it is still a valid baseline that can serve as an upper bound on the performance.
Methods And Evaluation Criteria: Public benchmarks make sense and allow readers to understand the ballpark of the performance, and to reproduce evaluations if needed.
Theoretical Claims: There are no theoretical claims with corresponding proofs in the manuscript.
Experimental Designs Or Analyses: The choice of training and validation data make it hard for readers to understand the influence of that data on the analysis of the method. This is one of the biggest concerns I have regarding this work.
Supplementary Material: Did not review supplementary materials, as I didn't notice any reference to them.
Relation To Broader Scientific Literature: The related work section is well written and addresses different alternative methods, such as reasoning in discrete space with CoT and kv cache compression for model efficiency; those methods address features of the proposed method from different angles.
Essential References Not Discussed: The presented discussion provides enough context for this work.
Other Strengths And Weaknesses: This method might open room for future work that would train a number of such "expert" cache processors separately on specific data domains, each contributing to the final mixture model.
Other Comments Or Suggestions: none
Questions For Authors: * I'd like to increase my score as long as the authors would consider designing experiments using open-source training datasets that are known not to be contaminated with any benchmark data. If that's not possible, would the authors be able to confirm that they did extensive de-contamination of their data w.r.t. all benchmark tasks from Table 2?
* Could you please include more information about the capacity (number of parameters) of the cache processors that were trained in the experimental models?
* Could you please provide any baseline result showing the performance of the baseline/frozen LLM if it were finetuned using the same data and the same hyper-parameters?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer vZqn,
We sincerely thank you for your detailed review and insightful comments on our work (Submission 403). We appreciate you acknowledging the merit of our core idea, the quality of the related work section, and the potential for future work stemming from our method.
We understand your primary concerns revolve around the proprietary dataset used and the need for specific baselines and parameter details. We appreciate the opportunity to clarify these points.
### **Regarding Proprietary Data and Fine-tuning Baseline:**
We recognize your significant concerns regarding the clarity of the "proprietary training and validation data," its potential influence, and the corresponding lack of a baseline showing the frozen model fine-tuned on this same data. These points were raised multiple times and in your third question.
**Our Response:**
> We sincerely thank the reviewer for highlighting these concerns regarding our dataset and baselines. We understand that clarity here is crucial for evaluating our method's contribution.
> The dataset used to train the coprocessor is *a subset of the data the model was pretrained on*. We did perform the suggested experiment of further finetuning the base model on this same dataset subset. However, as the base model was already extensively optimized on this data distribution during its initial pretraining, we observed negligible differences in perplexity or downstream performance. Standard fine-tuning paradigms predominantly yielded diminishing returns when applied directly to the original pretraining corpus. Furthermore, consistent with typical large-scale pretraining data, the corpus we used consists of text for self-supervised next-token prediction and lacks curated instruction-following examples or specific question-answering formats as in downstream tasks.
> Given that this fine-tuning baseline showed no significant change from the original frozen model, we initially omitted it to maintain focus on comparisons highlighting the architectural contribution of the coprocessor. However, we understand the reviewer's point about baseline completeness. We will update the manuscript (e.g., in the dataset description or appendix) to clarify the nature of this dataset (as part of the original pretraining corpus, focused on self-supervision) and explicitly state the result of the fine-tuning baseline experiment, explaining why it yielded minimal change.
### **Regarding Parameter Counts and Scaled Baseline:**
Regarding your request for more details on parameter capacity and the suggestion of a scaled-up baseline, as asked in your second question:
**Our Response:**
> We thank the reviewer for the important question on parameter counts. In our main experiments, the coprocessor has the same number of parameters as the base LLM.
> While a baseline scaling the base model to ≈2x size is a relevant comparison point, our primary goal was specifically enhancing and augmenting an existing, frozen model, rather than the distinct and computationally intensive task of training a larger model from scratch, placing the scaled baseline outside this study's scope.
> Importantly, we also explored parameter efficiency using a LoRA variant, which added only ≈2% of the base model's parameters while still providing substantial performance gains.
> We will revise the paper to clearly state all relevant parameter counts (full coprocessor and LoRA variant) and briefly discuss the scaled baseline context.
### **Regarding Data Decontamination / Use of Open Data:**
Addressing your first question regarding data decontamination, the potential use of open-source datasets, and the condition for increasing your score:
**Our Response:**
> We appreciate the reviewer's specific condition for potential score improvement. While re-running our main experiments on large open-source datasets is infeasible within the rebuttal period (though valuable future work), we confirm that we performed extensive decontamination of our proprietary pretraining data specifically against all benchmark tasks reported in Table 2.
---
We hope these clarifications, the results from experiments already performed (such as the fine-tuning baseline), and our commitments to update the manuscript with further details (data description, parameter counts, decontamination confirmation) adequately address the main concerns raised. We thank you again for your constructive feedback and willingness to reconsider your score based on these clarifications.
Sincerely,
The Authors of Submission 403
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my comments and looking forward to see the changes you plan to add. I will increase my score. | Summary: This paper proposes a novel method that augment the memories (kv-caches) with a set of latent embeddings from auxiliary compressor modules. This offers two main advantages, end-to-end differentiability by using soft (continuous) tokens and asynchronous operation by using compressor in offline while freezing the base model. Without fine-tuning on downstream tasks, they showed good performance improvement across diverse tasks.
Claims And Evidence: The claims and their motivations make a lot of sense. In particular, using extra modules, which can potentially be used asynchronously, can efficiently improve the performance through ‘latent thinking’.
Methods And Evaluation Criteria: Methods are well aligned to their motivations. Yet, I have some following concerns and questions:
- How about the training speed for these methods? KV-caches from the frozen base model are reused in the coprocessor while the coprocessor is being updated. So, it seems like we have to run an additional forward step with the frozen LLM for every sample. Though there could be advantages, like no need to update some parameters related to KV caches, or a reduced memory footprint for running the coprocessor.
- The authors suggested these methods can be utilized asynchronously, but it would be good to elaborate on the inference process for this in Section A.5. If there are extra sampling steps in the base model, at which positions do we put the latent embeddings?
- Have you tried to output latent embeddings sequentially, not in parallel?
- One disadvantage could be that the coprocessor has to be the same size as the base model, due to reusing KV caches and augmenting soft embeddings. As this point can make these methods less scalable, further discussion of heterogeneous model sizes would be great.
- Is there any result that generates and uses latent embeddings multiple times during generation, rather than in just a single forward call as in the main experiments? I guess we have to adjust the attention masks if we generate additional latent embeddings, as latent embeddings do not attend to each other in the training setting. This could induce additional overhead during inference.
- What were the results when you trained the models with 1 ahead token combined with the cache augmentation techniques (regarding the Table 6 results)?
I cannot see any problems in evaluation criteria.
Theoretical Claims: N/A
Experimental Designs Or Analyses: - An analysis of computational costs would further validate these methods.
Supplementary Material: I checked the 9B experiment (larger sizes), LoRA fine-tuning for downstream tasks, scaling with data, the different strategy for training the coprocessor, and the asynchronous experimental results.
Relation To Broader Scientific Literature: I believe this work is related to recent latent-reasoning research. In particular, it can be classified among latent-reasoning methods that use extra modules to augment thought tokens. CCOT and SoftCoT (see the references below) seem well aligned with this paper's research direction.
Essential References Not Discussed: As explained above, there are some concurrent works that use extra modules (LoRA or an assistant LLM) to generate thought tokens. I suggest the authors cite these references in their paper.
[1] Compressed Chain of Thought: Efficient Reasoning Through Dense Representations
[2] SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: I’d be glad to discuss with the authors to raise my evaluation score. Please refer to some questions above.
One further question is how the authors implement parallel decoding parts efficiently. Is it possible to use FlashAttention mechanism for this non-causal attention masking?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer 9maT,
Thank you for the positive evaluation, thoughtful review, and examination of the supplement. We appreciate your constructive questions and address them below:
### **Regarding Training Speed/Process:**
You asked about the training speed and the forward passes involved.
**Our Response:**
> We thank the reviewer for the question regarding training speed. The training involves three main forward passes as outlined in Figures 1 and 2: (1) LLM processes input text to generate KV caches. (2) The coprocessor takes these KV caches to generate latent embeddings in parallel for all insertion points using an attention mask (avoiding extra forward steps per sample). (3) Latent embeddings are combined with the original text KV caches in the LLM to predict ahead tokens for loss calculation. Due to KV cache reuse, the effective sequence length involves the original length plus terms related to the number of latent and ahead tokens per insertion point. We will ensure this process is clearly described.
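For concreteness, the three passes above can be sketched with a toy model in which fixed random projections stand in for the frozen LLM and the coprocessor. All shapes, names, and the "KV cache as one vector per position" simplification are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D, SEQ, N_LATENT = 8, 6, 2   # hidden size, input length, latents per insertion point

# (1) Frozen base LLM: one forward pass over the input yields a KV cache.
#     A fixed random projection stands in for the transformer layers.
W_base = rng.normal(size=(D, D))
def base_llm_kv(x):                 # x: [seq, D] token embeddings
    return x @ W_base               # toy "KV cache": one vector per position

# (2) Coprocessor: a single forward pass over the cache emits latent
#     embeddings for every insertion point in parallel (no decoding loop).
W_co = rng.normal(size=(D, N_LATENT * D))
def coprocessor(kv):
    return (kv @ W_co).reshape(kv.shape[0], N_LATENT, D)

x = rng.normal(size=(SEQ, D))
kv = base_llm_kv(x)                 # pass (1)
latents = coprocessor(kv)           # pass (2): [SEQ, N_LATENT, D]

# (3) Latents are spliced into the cache before the frozen LLM predicts the
#     ahead tokens, so the effective sequence length grows accordingly.
effective_len = SEQ + SEQ * N_LATENT
```

The point of the sketch is the data flow: only pass (2) updates parameters; passes (1) and (3) run through the frozen model, with (1)'s output reused.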
### **Regarding Asynchronous Inference Elaboration:**
You asked for elaboration on the asynchronous inference process and the placement of latent embeddings when extra sampling steps occur.
**Our Response:**
> Thank you for this important question about asynchronous inference. To clarify the process: If, for example, the base model performs N=2 asynchronous sampling steps immediately after the prompt while the coprocessor computes, the coprocessor uses the KV cache of the *original prompt only* (ignoring the N async steps) to generate latent embeddings in a single pass. These generated latents are then inserted *between* the original prompt's KV cache and the KV cache of the N=2 asynchronously sampled tokens before resuming further generation. We will add a detailed elaboration of this process to Appendix A.5 as requested.
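The insertion order described here can be illustrated with plain lists standing in for KV-cache entries (the names `prompt_kv`, `latents`, and `async_kv` are illustrative):

```python
prompt_kv = ["p0", "p1", "p2"]   # KV cache of the original prompt
async_kv = ["a0", "a1"]          # N=2 tokens the base model sampled
                                 # while the coprocessor was running
# Latents are computed from prompt_kv only, ignoring the async steps...
latents = ["z0", "z1"]
# ...and inserted BETWEEN the prompt cache and the async-token cache
# before generation resumes.
augmented_cache = prompt_kv + latents + async_kv
```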
### **Regarding Sequential Latent Embedding Generation:**
You asked if we tried outputting latent embeddings sequentially.
**Our Response:**
> We thank the reviewer for asking about sequential latent generation. We did not pursue this approach in the current work primarily due to the significant latency costs associated with sequential generation during both training and inference compared to our parallel method. However, we agree that exploring sequential latent deliberation is an interesting direction for future research, potentially offering different trade-offs.
### **Regarding Scalability and Heterogeneous Model Sizes:**
You raised a valid point about the potential disadvantage of the coprocessor needing to be the same size as the base model and asked for discussion on heterogeneous sizes.
**Our Response:**
> Adapting to different coprocessor/base model sizes (requiring KV cache adaptation like sub-sampling) needs further research. We acknowledge this limitation and will add discussion on scalability considerations for heterogeneous sizes.
### **Regarding Multi-step Latent Generation:**
You asked about generating and using latent embeddings multiple times during generation.
**Our Response:**
> Our training uses single-step parallel latent generation for efficiency, as multi-step dependencies increase training compute. Multi-step generation (where latents attend to prior latents) is compelling future work, especially for dialogue, though evaluating it requires infra changes beyond this rebuttal. We will discuss this possibility as future work.
### **Regarding results of 1-ahead + Cache Augmentation:**
You asked about the results when training with 1-ahead token prediction combined with cache augmentation.
**Our Response:**
> Thank you for this specific question regarding Table 6. When training with only 1 ahead token combined with cache augmentation, we observed only negligible improvements over the baseline model.
### **Regarding Suggested References:**
You suggested citing concurrent works like CCOT and SoftCoT.
**Our Response:**
> Thank you for suggesting CCOT and SoftCoT. We agree they are relevant and will cite/discuss them appropriately in the revised related work section.
### **Regarding Efficient Parallel Decoding Implementation:**
You had a further question on implementing the parallel decoding efficiently, specifically regarding FlashAttention.
**Our Response:**
> Thank you for the question about efficient implementation. While we use standard attention optimizations, specialized techniques like FlashMask offer a path for future efficiency gains. FlashMask is suited for our method's sparse non-causal masks, potentially improving speed/reducing overhead by skipping computation for masked connections (e.g., latent-to-latent).
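As a rough illustration of the sparse non-causal pattern at issue — latent positions attending to the prompt but not to each other, matching the training setting mentioned earlier in the review — here is a toy boolean mask; the sizes and exact connectivity are assumptions, not the paper's mask:

```python
import numpy as np

P, L = 4, 3                           # prompt tokens, latent positions
n = P + L
mask = np.zeros((n, n), dtype=bool)   # mask[i, j]: may position i attend to j?

for i in range(P):                    # prompt tokens: ordinary causal attention
    mask[i, : i + 1] = True
for j in range(P, n):                 # latent tokens: see the whole prompt and
    mask[j, :P] = True                # themselves, but not each other
    mask[j, j] = True
```

Block-sparse attention kernels can skip the all-False latent-to-latent region entirely, which is the efficiency opportunity the response points to.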
---
We hope these responses adequately address your questions and comments. Thank you again for your insightful feedback and positive evaluation, which will help us improve the paper significantly.
Sincerely,
The Authors of Submission 403
---
Rebuttal Comment 1.1:
Comment: Thanks for the answers. I have the following comments:
- Regarding asynchronous inference, I understand the overall process and performance (A.5) of asynchronous inference. However, I feel a slight disconnect: the emphasis on asynchronous methods throughout the paper (e.g., in the Introduction) seems disproportionate to the rather small experiment presented in the appendix. I fully acknowledge it is a significant strength of this framework, but the experimental support appears lacking. Are there any plans for supplementary experiments on this?
- Regarding the performance of "1-ahead + Cache Augmentation" being the same as the baseline: what could be the reason for this? I suspect it might be related to the number of latent embeddings; 16 latent embeddings may be too many for a single ahead token. Is there any ablation study on the number of latent embeddings? (Not requiring experiments at this point.)
- Regarding the references, it was simply a suggestion, since these methodologies that use external modules for latent embeddings seem very relevant yet do not mention each other.
===========
Thanks for the clarification. All my concerns are resolved, and I raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply!
1. **Regarding Asynchronous Inference:** We agree that the capacity for async operation is an important strength of this work. However, the core technical contribution that needs to be validated is that the co-processor can be efficiently fit with a pre-training objective, on (plentiful) pre-training data, using the approach depicted in figure 2, rather than on task-specific data. This core technical contribution allows the co-processor to be large enough to do non-trivial latent reasoning. Therefore, our main experiments focused on showing the approach can scale. We do give the async evaluation in the appendix to show that async operation is possible with limited performance regressions, even with the co-processor not trained "async-aware". We do agree with the reviewer that additional work, for example re-running the co-processor training to optimize in an asynchronous-aware manner, would be interesting, but doing so before the end of the rebuttal period would require more computational resources than we currently have.
2. **Regarding 1-ahead + Cache Augmentation:** Sorry for misunderstanding your previous question – I thought you asked about 1 latent embedding, but you were asking about 1 ahead token. The results for 1 ahead and 2 ahead tokens on GSM8K are **22.90** and **23.65**, respectively, both higher than the baseline of **21.38** (results for larger numbers of ahead tokens are in Table 6). Our observation was that having a decent number of ahead tokens is important so the latent embeddings learn more than just predicting the very next token(s). However, using a very long lookahead seemed to harm prediction performance for the first few tokens immediately after the latent embeddings, which degraded sampling performance. We do have ablation studies on the number of latent embeddings in Figure 3, Table 1, and Table 2.
3. **Regarding References:** Thanks! We agree these are good references that we should include in our related work section.
We appreciate you engaging further and hope this addresses your follow-up comments. | Summary: In this work, the authors train a hyper-network (termed “coprocessor”) which takes a KV-cache from a language model mid-generation and produces a set latent embeddings which are appended to the KV-cache before producing the final answer. Critically, the coprocessor produces the latent embeddings with *a single forward pass* (unlike CoT decoding which requires decoding tokens step-by-step).
The main claims of this work are:
1. Cache augmentation enables lower perplexity on subsequent tokens.
2. When the cache is augmented the decoder achieves improved performance compared to CoT reasoning on a range of reasoning-intensive tasks.
3. Using the coprocessor is more efficient than using CoT decoding because CoT decoding is sequential and the coprocessor operates in parallel.
Claims And Evidence: Are the claims made in the submission supported by clear and convincing evidence? If not, which claims are problematic and why?
1. If the claim is about the comparison with a frozen model, this claim is well-supported by evidence (see Figure 3). However, I don’t think this is a fair baseline. The coprocessor saw over 200 billion tokens of training data from a “proprietary pretraining dataset” before evaluation on that same proprietary dataset. In contrast, the baseline did not see any of that data. Without any further information on that pretraining dataset, it is not clear to me whether the reduction in perplexity is due to the proposed methodology (i.e., the coprocessor) or just the fact that we’re training on data from the same distribution as the test data. To be convinced that the benefits come from the specifics of having a coprocessor, I’d like to see how perplexity improves when we simply fine-tune the base model on the same pretraining data.
2. This claim is not very well-supported by evidence. Comparisons with CoT are only made on GSM-8K, a single reasoning task. To support the claim that cache augmentation actually improves performance w.r.t. CoT I would expect evaluations on more datasets.
3. This idea is not supported by strong empirical or theoretical evidence in the paper. The authors argue in the introduction and abstract that cache augmentation has efficiency benefits over CoT because the coprocessor can be run asynchronously while the model continues to generate. However, it’s not obvious to me that this could actually yield latency or throughput improvements in practice. At small batch sizes, language model generation is typically I/O-bound (the bottleneck is loading weights from HBM to the cores, not performing the compute). Since the coprocessor has its own weights, loading those weights will contend for bandwidth with the generation. As a result, the coprocessor and generator cannot fully overlap. At high batch sizes, we are typically compute-bound (the cores are “full”), so adding more parallelism won’t help significantly. To convince me that the method provides a speedup, I would expect to see empirical speed benchmarks or at least a theoretical cost model. Otherwise, I think this claim should be downplayed in the abstract and introduction.
Methods And Evaluation Criteria: Please see discussion of claims and evidence above.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Yes I did. Please see the discussion of claims and evidence above where I discuss the experimental design and analyses in the context of the paper’s claims.
Supplementary Material: I reviewed the entire supplement.
Relation To Broader Scientific Literature: In my view, the paper is most closely related to the hypernetwork literature. The authors do a great job situating the work in the context of the broader literature in the related work.
Essential References Not Discussed: None that I’m aware of. The authors provide a nice review of the relevant literature.
Other Strengths And Weaknesses: The method proposed is interesting and to my knowledge novel. However, due to some weaknesses in the evidence discussed above, it’s not clear to me how much of the benefit comes from the proposed method vs. the choice of specific training data.
Other Comments Or Suggestions: None.
Questions For Authors: Several works (e.g. https://arxiv.org/abs/2310.07923) have shown that the power of chain of thought actually comes specifically from the sequential nature of the computation. I suspect that on some tasks which require sequential computation (e.g. S5), CoT could outperform the parallel coprocessor method. Have you found any limitations or tradeoffs when it comes to the parallel nature of the coprocessor?
Ethical Review Concerns:
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer w2nr,
Thank you for the detailed review of Submission 403 and the constructive feedback. We're glad you found the method novel and the related work discussion good. We address your points below:
### **Regarding Claim 1 (Perplexity Reduction & Baseline Fairness):**
You questioned the baseline fairness due to training data and suggested a fine-tuning comparison.
**Our Response:**
> We thank the reviewer for their detailed feedback and insightful question regarding the training data and baseline comparisons. We recognize the importance of providing more clarity here.
> The dataset used to train the coprocessor is *a subset of the data the model was pretrained on*. We did perform the suggested experiment of further finetuning the base model on this same dataset subset. However, as the base model was already extensively optimized on this data distribution during its initial pretraining, we observed negligible differences in perplexity or downstream performance. Standard fine-tuning paradigms predominantly yielded diminishing returns when applied directly to the original pretraining corpus. Furthermore, consistent with typical large-scale pretraining data, the corpus we used consists of text for self-supervised next-token prediction and lacks curated instruction-following examples or specific question-answering formats as in downstream tasks.
> Given that this fine-tuning baseline showed no significant change from the original frozen model, we initially omitted it to maintain focus on comparisons highlighting the architectural contribution of the coprocessor. However, we understand the reviewer's point about baseline completeness. We will update the manuscript (e.g., in the dataset description or appendix) to clarify the nature of this dataset (as part of the original pretraining corpus, focused on self-supervision) and explicitly state the result of the fine-tuning baseline experiment, explaining why it yielded minimal change.
### **Regarding Claim 2 (Performance Improvement vs. CoT & Evaluation Scope):**
You noted the limited evaluation scope (GSM-8K only) for the CoT comparison.
**Our Response:**
> We appreciate this point. Our primary goal wasn't necessarily to universally outperform CoT, but rather to investigate parallel 'latent deliberation'. We compared against zero-shot CoT ('Let’s think step by step') because it enhances reasoning at inference time without task-specific demonstrations or fine-tuning, making it a comparable low-data approach similar to our use of general pretraining data. In our experiments, this prompt yielded near-baseline results on MATH/HumanEval; we highlighted GSM-8K because zero-shot CoT provided a notable improvement there, thus offering a meaningful comparison point against our method. We will add discussion clarifying this rationale, explicitly mention the MATH/HumanEval CoT results for completeness, and refine claims accordingly in the paper.
### **Regarding Claim 3 (Efficiency Benefits vs. CoT & Hardware Considerations):**
You questioned the efficiency claim due to lack of empirical evidence considering hardware limitations.
**Our Response:**
> Thank you for raising these critical efficiency points regarding I/O/compute bounds. Your concerns are valid under the assumption of shared hardware. However, our proposed architecture does not impose this constraint. The asynchronous coprocessor performs its single forward pass (akin to prefill) and can be deployed on **separate hardware** (e.g., a different accelerator or node). This fundamentally changes the efficiency analysis by decoupling its execution from the base model's subsequent token-by-token generation. This mitigates resource contention on the decoder's device and enables potential efficiency gains not possible if constrained to the same hardware.
### **Regarding your Question (Limitations/Tradeoffs of Parallel vs. Sequential Computation):**
You asked about the limitations of the parallel approach vs. sequential CoT.
**Our Response:**
> Thank you for this relevant point on CoT's sequential nature and the cited work. Our work takes a first step in exploring parallel latent deliberation. We agree it's plausible that tasks heavily reliant on iterative refinement, long causal dependencies, or where intermediate steps explicitly build upon each other might benefit more from sequential CoT. Our current parallel coprocessor doesn't explicitly model such sequential dependencies within the latent injection. This represents a potential limitation compared to sequential methods, which we will acknowledge and discuss in the revised paper. Investigating sequential latent deliberation is an important avenue for future work.
We hope this addresses your concerns and thank you again for the valuable feedback to improve the paper.
Sincerely,
The Authors of Submission 403 | Summary: In this work a co-processor is trained to get as input the generated KV-cache of a frozen model - after given an input x and a set of soft tokens and produce a set of latent embeddings z. These embeddings are appended to the KV-cache (augmentation) and the original frozen model decodes towards output y (generation with augmented context).
The co-processor is trained by allowing it to first generate a number of latent representations and only then produce a predetermined number of output tokens for which a training loss is computed (starting from predetermined positions). The augmented models thus exhibit lower perplexity and increased performance across a broad selection of tasks, which typically improves further when the model is allowed to generate more latent embeddings.
This method, which allows asynchronous operation and offers end-to-end differentiability, compares favorably to closely or remotely similar schemes (Pause Token, zero-shot CoT). The co-processor is typically initialized as a twin model but can also be trained from scratch or through LoRA (the latter, although incurring a performance hit, is a very attractive option).
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: Most relevant: further explorations of latent space; Coconut and Pause Token papers are close along this dimension.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This is a very timely contribution because decoding towards tokens could be seen as a kind of quantization, so exploring the continuous latent space more could be very advantageous towards finally producing tokens of "higher" quality.
Training seems more complex, there are more and different hyper-parameters to explore (positions, how many latents, how many tokens to generate). This will translate to training overhead.
Other Comments Or Suggestions: - It would be nice to include a comment on the overhead of generating more embeddings (time), and on the fact that two models, rather than one, are in use at inference time (space).
- Coconut is also a recent, similar approach in the sense that the continuous latent space is explored more before tokens are materialized. The reader would expect a more detailed comparison, essentially continuing what is started toward the end of page 7.
Questions For Authors: Could you position the results obtained here on math reasoning tasks, and the training method used to reach them, relative to the recently re-popularized RL techniques that attain impressive results for this type of task? Is there room for synergy, or are these views entirely orthogonal?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer n6p1,
Thank you very much for your positive evaluation and your insightful review of our work. We greatly appreciate your accurate summary, positive feedback on our claims and experiments, and for reviewing the supplementary material thoroughly. We are encouraged that you see this as a timely contribution.
We would like to address your specific comments and question:
### **Regarding Inference Overhead (Time/Space):**
You suggested including comments on the overhead related to generating embeddings and using two models at inference time.
**Our Response:**
> Thank you for raising this important point about inference overhead. Regarding space, while our main setup uses a coprocessor similar in size to the base model, we also demonstrated strong results with a parameter-efficient LoRA variant adding only ~2% extra parameters (as discussed in the supplement). Regarding time, our method adds only one extra forward pass through the coprocessor during inference. Crucially, this pass can operate asynchronously, potentially hiding much of its latency, as discussed regarding deployment scenarios. Furthermore, KV cache reuse limits the computational overhead of this pass. We will add a note discussing these inference time and space considerations more explicitly in the revised manuscript.
### **Regarding Comparison with Coconut:**
You suggested a more detailed comparison with the Coconut paper.
**Our Response:**
> Thank you for the suggestion to expand the comparison with Coconut. While both works explore continuous representations, key differences make direct comparison challenging. Coconut focuses on training an LLM to generate sequential 'continuous CoT' embeddings, often fine-tuning on specific downstream tasks. In contrast, our method uses a separate coprocessor to generate *parallel* latent embeddings that augment a *frozen* base LLM, and our coprocessor is trained on general pretraining data, aiming for broad applicability without task-specific tuning. Given these methodological distinctions and the near-simultaneous appearance of the works (Dec 2024), a direct empirical comparison is difficult. However, we will expand the discussion in the related work section to better highlight these conceptual similarities and differences as requested.
### **Regarding Relation to RL Techniques for Reasoning:**
You asked how our results/method for math reasoning compare to recent RL techniques and if there's potential synergy.
**Our Response:**
> Thank you for this insightful question regarding RL techniques for reasoning. Recent RL methods, particularly those using process supervision or outcome rewards, have shown impressive results, often by fine-tuning models to generate better reasoning steps or verify solutions. Our approach is largely orthogonal, focusing on enhancing the base model's capabilities through parallel 'latent deliberation' via the coprocessor, trained on general pretraining data without task-specific reward signals. However, we see potential for synergy: RL techniques could potentially be applied *on top* of our augmented model to further refine the final reasoning output or even guide the generation process. Alternatively, RL could be explored as a method to train the coprocessor itself, though this adds significant complexity. We view these as complementary approaches rather than competing ones.
---
Thank you again for your valuable feedback and positive assessment of our work. We believe addressing these points will further strengthen the paper.
Sincerely,
The Authors of Submission 403 | null | null | null | null | null | null |
Catch Your Emotion: Sharpening Emotion Perception in Multimodal Large Language Models | Accept (spotlight poster) | Summary: This paper proposes a method called Sharpening Emotion Perception in Multimodal Large Language Models (SEPM) to improve emotion recognition in MLLMs. It addresses two challenges: confusion between semantically similar emotions and the visual redundancy that distracts from emotional cues. SEPM incorporates a two-stage inference process and focuses on emotion-related visual cues, enhancing classification accuracy. The approach is training-free, requiring no additional manual annotations or fine-tuning, and significantly improves MLLM performance on emotion tasks, offering a resource-efficient and scalable solution for emotion recognition.
Claims And Evidence: YES
Methods And Evaluation Criteria: Yes, the proposed method can effectively enhance the model's ability in sentiment reasoning.
Theoretical Claims: The approach is logically clear and the ideas are innovative.
Experimental Designs Or Analyses: The experimental results are comprehensive. The method has been tested on different datasets, and the results are shown to be reliable through ablation studies and visualization experiments.
Supplementary Material: The author provides the algorithm code, which is helpful for understanding the method pipeline and the details of the related techniques.
Relation To Broader Scientific Literature: The author references the latest related work and proposes a method that takes a different approach compared to existing studies.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Pros:
SEPM improves emotion recognition in MLLMs without requiring additional training or manual annotations, making it a resource-efficient and scalable solution for real-world applications.
Cons:
- The Focus-on-Emotion prompt designed in the paper leads to significant performance improvements. I'm curious to know if this prompt is the only one that works, or if changing the prompt would have an impact on performance.
- In theory, removing more visual information could mean losing important emotional cues needed for the model to make accurate predictions, but this is not clearly reflected in the paper's sensitivity analysis.
- The description of the selection process based on the confidence threshold in the first stage is too simplistic in the paper.
- The paper contains some inappropriate expressions.
Other Comments Or Suggestions: The paper mentions that the model is highly sensitive to different prompts. I hope future work can explore this aspect further.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Dear Reviewer W74m:
We are deeply grateful for your positive feedback on our work and the insightful suggestions. We have carefully reviewed each point and provided detailed responses accordingly.
**Q1: Performance under different prompts** (Other Strengths And Weaknesses)
To explore performance under different prompts, we designed a prompt augmentation semantically similar to Focus-on-Emotion, *Identify the predominant emotion conveyed*, and conducted the corresponding experiments. As presented in the table below, the performance of the new prompt is noticeably inferior to that of Focus-on-Emotion. We hypothesize that more direct prompts tend to yield better results. We plan to further explore the impact of various prompt designs on performance in future work.
*Table: Ablation experiments on different prompts*
| Prompt type | Drop rate = 0.1 | Drop rate = 0.2 |
| :---------------------------------------: | :-------------: | :-------------: |
| Identify the predominant emotion conveyed | 50.53 | 50.77 |
| Focus on emotion | **56.04** | **56.24** |
**Q2: More analysis on removing visual information** (Other Strengths And Weaknesses)
Compared to general tasks, visual redundancy in emotion-related tasks is more likely to bias the model's inference. Therefore, dropping a small amount of the least relevant visual information can help improve the model’s performance. In Figure 3 of the original paper, we conduct a sensitivity analysis under different drop rates and observe that discarding a small amount of redundant information leads to performance gains, whereas excessive dropping removes informative content and causes performance to decline to some extent. Additionally, Figure 4 of the paper visualizes the specific content being dropped, showing that at lower drop rates the removed tokens are mostly irrelevant, which allows the model to focus on salient information and enhances its reasoning capability.
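A minimal sketch of relevance-guided token dropping in this spirit (the function name, the relevance scores, and the tie-breaking are all illustrative assumptions, not SEPM's actual implementation):

```python
import numpy as np

def drop_redundant_tokens(visual_tokens, relevance, drop_rate=0.1):
    """Discard the drop_rate fraction of visual tokens with the lowest
    relevance scores, preserving the order of the survivors."""
    n_drop = int(len(visual_tokens) * drop_rate)
    if n_drop == 0:
        return list(visual_tokens)
    dropped = set(np.argsort(relevance)[:n_drop])  # indices of least relevant
    return [t for i, t in enumerate(visual_tokens) if i not in dropped]
```

With a low `drop_rate`, only the lowest-scoring tokens are removed; raising it eventually discards informative tokens, matching the sensitivity pattern described above.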
**Q3: The confidence-based prompt selection process in the first stage** (Other Strengths And Weaknesses)
Sorry for the confusion. To encourage the model to focus on relatively simpler problems at each stage, we introduce prompt enhancement for fine-grained inference based on the outcome of coarse-grained inference. To mitigate the risk of error propagation from the first stage adversely impacting second-stage inference, we assess the model's confidence using the variance of the output logits rather than relying solely on predicted labels. Specifically, when the variance among the logits is high (indicating strong confidence), we apply direct positive/negative prompt augmentation. Conversely, when the variance is low (suggesting uncertainty), we incorporate ambiguity-related descriptions that characterize the model's first-stage decision process, which then guide the second stage. The selection of the variance threshold is discussed in the experimental section, where the effectiveness of the proposed strategy is demonstrated.
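A hypothetical sketch of such variance-gated prompt selection (the threshold value, the logit ordering, and the prompt wording are illustrative assumptions, not the paper's exact procedure):

```python
def stage_two_prompt(logits, threshold=0.5):
    """Pick the second-stage prompt from first-stage (coarse) logits.
    High variance across logits -> confident -> direct polarity prompt;
    low variance -> uncertain -> describe the ambiguity instead."""
    mean = sum(logits) / len(logits)
    var = sum((v - mean) ** 2 for v in logits) / len(logits)
    if var >= threshold:
        # Assumed ordering: logits[0] = positive class, logits[1] = negative.
        polarity = "positive" if logits[0] >= logits[1] else "negative"
        return f"The overall sentiment is {polarity}."
    return "The coarse stage was uncertain; consider both polarities."
```

For example, logits of `[3.0, 0.0]` have high variance and yield the direct positive prompt, while `[1.0, 1.1]` are nearly tied and fall back to the ambiguity description.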
**Q4: Inappropriate expressions** (Other Strengths And Weaknesses)
We will revise the inappropriate expressions in the final version. Thanks for your suggestions. | Summary: This paper proposes SEPM to tackle emotion recognition challenges in multimodal models. It focuses on issues like confusing similar emotions and visual noise. SEPM introduces a two-stage inference process: a coarse-to-fine approach to improve confidence in emotion classification and a focus on relevant emotional cues to reduce visual redundancy. The method improves performance without needing extra training, offering a scalable and efficient solution for emotion recognition tasks.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes. The method is well evaluated.
Theoretical Claims: The theory used in the paper is clear and easy to understand.
Experimental Designs Or Analyses: The experimental results are comprehensive.
Supplementary Material: The code provided by the author helps to better understand the method.
Relation To Broader Scientific Literature: The author introduces an innovative method that enhances the sentiment reasoning ability of MLLM through a two-stage inference process and visual augmentation.
Essential References Not Discussed: NaN.
Other Strengths And Weaknesses: Strengths:
1.The method is novel, and the experiments are comprehensive.
2.The paper is easy to understand.
Weaknesses:
1.The paper does not clearly specify how the attention map is obtained in the method, and it would be helpful to provide more details on this.
2.The experiments are relatively thorough, but the ablation study includes fewer metrics. It would be helpful to provide more information.
3.I would like to see an analysis of the inference efficiency.
Other Comments Or Suggestions: None.
Questions For Authors: Refer to weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer 6rbK:
Thank you very much for your valuable comments and constructive feedback. Below, we carefully respond to each of your concerns point-by-point, providing detailed explanations and supplementary evidence to further clarify our approach and demonstrate its effectiveness.
**Q1: Attention map computation** (Other Strengths And Weaknesses)
Sorry for any inconvenience caused. Within the transformer architecture of an MLLM, each block generates an attention map that reflects the model's focus at that layer. Notably, attention maps from deeper layers tend to encode more abstract, high-level semantic representations, aligning more closely with the model's learned conceptual understanding of the input. Therefore, we use the attention map from the final transformer block as the basis for token dropping in the second stage. Additionally, to enhance the accuracy of emotional focus, we retain only the attention maps between visual tokens and the Focus-on-Emotion prompt.
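As a sketch of this step, the last-block attention map can be sliced down to Focus-on-Emotion (FoE) prompt queries over visual-token keys (the token index layout and the head-averaging are assumptions for illustration, not the authors' exact code):

```python
import numpy as np

def foe_visual_attention(attn_last, foe_idx, vis_idx):
    """Keep only the attention between FoE prompt tokens (queries) and
    visual tokens (keys) in the final transformer block.

    attn_last: (L, L) attention map from the last block, assumed
               already averaged over heads.
    foe_idx, vis_idx: positions of the FoE prompt tokens and the visual
               tokens in the full input sequence (hypothetical layout).
    """
    attn_last = np.asarray(attn_last)
    # np.ix_ builds the cross-product index, giving a (L_t, L_v) sub-matrix
    return attn_last[np.ix_(foe_idx, vis_idx)]

# Toy 6-token sequence: tokens 0-1 are the FoE prompt, tokens 3-5 are visual.
A = np.arange(36, dtype=float).reshape(6, 6)
sub = foe_visual_attention(A, foe_idx=[0, 1], vis_idx=[3, 4, 5])
print(sub.shape)  # (2, 3)
```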
**Q2: More comprehensive ablation experiment** (Other Strengths And Weaknesses)
Since we perform a classification task using an MLLM, the metric Accuracy serves as an appropriate indicator of performance. We conducted further ablation studies on the EmoSet dataset. As shown in the table, the effectiveness of each module is further demonstrated. More details will be provided in the final version.
*Table: Ablation experiments on additional datasets*
| CCI | FP | VTA | Emotion6 (ACC) | EmoSet (ACC) |
| :----------: | :----------: | :----------: | :------------: | :----------: |
| | | | 48.32 | 52.77 |
| $\checkmark$ | | | 51.68 | 53.98 |
| | $\checkmark$ | | 51.52 | 54.28 |
| | | $\checkmark$ | 53.03 | 56.10 |
| $\checkmark$ | $\checkmark$ | $\checkmark$ | **54.04** | **56.24** |
**Q3: Discussion on inference efficiency** (Other Strengths And Weaknesses)
Although the two-stage inference process introduces a certain increase in inference time, the overall inference latency for emotion recognition tasks remains low, due to the relatively lightweight nature of the task, making this additional overhead negligible in practical use. Additionally, we effectively reduced part of the inference time by discarding tokens. Overall, the slight extra time cost results in improved inference performance.
---
Rebuttal Comment 1.1:
Comment: I think all the concerns have been addressed by the authors. I will maintain the score. | Summary: This paper presents Sharpening Emotion Perception in MLLMs (SEPM), a training-free method to enhance emotional reasoning in multimodal large language models. SEPM improves emotion classification by using a Confidence-Guided Inference framework and Focus-on-Emotion Visual Augmentation to reduce distractions. Experimental results show significant performance improvements in emotion-related tasks, offering a scalable, resource-efficient solution.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes. The method is relatively applicable.
Theoretical Claims: The paper presents a simple and easy-to-understand approach, but provides limited discussion on the theory.
Experimental Designs Or Analyses: The experimental results are comprehensive. The author conducts various experiments with different relative method in various downstream tasks.
Supplementary Material: The author has uploaded the experimental code, which is useful.
Relation To Broader Scientific Literature: Based on the issues in existing work, the author proposes a new approach.
Essential References Not Discussed: There is none.
Other Strengths And Weaknesses: Pros:
• The paper is well-structured and clearly written.
• SEPM is able to enhance emotion recognition in MLLMs without requiring additional training or manual annotations.
Cons:
• The generalizability of SEPM is not fully established, as its performance and suitability for other multimodal tasks have not been adequately validated, indicating the need for additional experiments and data validation.
• Why does the accuracy in Fig. 3 show little change with $\beta$ on the EmoSet dataset? Additionally, how do larger $\beta$ values affect the results? Further discussion is needed.
• The meaning of Eq. (6) is unclear; further explanation would be appreciated.
• The numbers on the right side of the dashed line in Fig. 6 are too small, resulting in poor readability.
Other Comments Or Suggestions: It is suggested to correct the minor errors in the paper.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer jWJJ:
Thank you again for your thoughtful and constructive suggestions. In the following responses, we address each of your points thoroughly, providing additional explanations and supporting evidence to strengthen the clarity of our methods.
**Q1: Discussion on generalizability of SEPM** (Other Strengths And Weaknesses)
Thank you for highlighting this issue. It should be clarified that 1) the two-stage inference process in our method is specifically designed to address the progressive categorization from coarse-grained to fine-grained levels inherent in emotion recognition tasks, and 2) the visual enhancement component of our method is motivated by the interference caused by redundant visual information in emotion-related tasks. Consequently, both modules are specifically tailored for emotion recognition and are not directly transferable to other tasks. Nonetheless, we have explored applying the Focus-on-X prompt to other domains and conducted experiments on a medical dataset (VQA-RAD [1]). As shown in the table, models across various architectures all exhibited improved performance, indicating that Focus-on-X can effectively serve as a generalized prompt enhancement.
*Table: Generalizability experiment on the VQA-RAD Dataset (Medical Domain)*
| Dataset | LLaVA-7b | VILA-8b |
| :--------------: | :-------: | :-------: |
| Zero-shot | 37.47 | 38.58 |
| Focus-on-Medical | **39.25** | **40.58** |
**Q2: Experiment on higher drop rate** (Other Strengths And Weaknesses)
We conducted ablation experiments with higher drop rates ($\beta$), as shown in the table below. As the drop rate increases, the performance of the model declines to some extent but still maintains a certain inference capability. We believe discarding the least relevant redundant information helps the model better focus on emotion-related content, thereby improving performance. However, as the drop rate continues to increase, the model begins discarding some moderately important information, leading to a reduction in inference accuracy. Nonetheless, since the most critical emotional information is preserved, the model's performance does not completely collapse.
*Table: Ablation experiment on drop rate*
| Drop rate | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 |
| :-------: | :---: | :-------: | :---: | :---: | :---: | :---: | :---: | :---: |
| ACC | 56.04 | **56.24** | 55.97 | 55.96 | 53.95 | 53.82 | 53.81 | 52.51 |
**Q3: Lack of clarity in Equation 6** (Other Strengths And Weaknesses)
Sorry for the confusion. We have revised the relevant part of Equation 6:
Then we extract the FoE prompt set as emotion-related text, leveraging the query dimension of the textual attention logits and the key dimension of the visual modality to construct the drop matrix $P \in \mathbb{R}^{L_t \times L_v}$, where $L_t$ and $L_v$ are the lengths of the text tokens (FoE) and visual tokens, respectively. The drop matrix $P$ is defined as:
$$
P[i, j] = \mathcal{A}(i, j), \quad i \in \{\, x \mid \mathbb{I}[x] \in \mathbb{T} \,\}, \quad j \in \{\, y \mid \mathbb{I}[y] \in \mathbb{V} \,\},
$$
where $\mathbb{I}$ denotes the set of all input tokens, including both textual and visual modalities, $\mathbb{T}$ denotes FoE prompt token set and $\mathbb{V}$ is visual tokens set, i.e.
$$
(\mathbb{T} \cup \mathbb{V}) \subseteq \mathbb{I}
$$
We will update the above content in the final version.
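Under the definition above, constructing the relevance scores from $P$ and dropping the least relevant visual tokens could be sketched as follows (the max-over-queries pooling and the variable names are assumptions for illustration; the rebuttal only defines $P[i, j] = \mathcal{A}(i, j)$, not the pooling rule):

```python
import numpy as np

def drop_visual_tokens(P, beta):
    """Drop the fraction `beta` of visual tokens least relevant to the
    FoE prompt, according to the drop matrix P of shape (L_t, L_v).

    Pooling P over its query (text) dimension with max is an assumed
    aggregation rule: it keeps any visual token strongly attended by at
    least one FoE prompt token.
    """
    P = np.asarray(P, dtype=float)
    relevance = P.max(axis=0)                 # (L_v,) per-visual-token score
    n_drop = int(beta * relevance.size)       # number of tokens to discard
    keep = np.sort(np.argsort(relevance)[n_drop:])  # indices of kept tokens
    return keep

P = [[0.05, 0.40, 0.10, 0.30, 0.15]]          # one FoE query, five visual tokens
print(drop_visual_tokens(P, beta=0.4))        # drops the two lowest-scoring tokens
```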
**Q4: Problem on Figure 6** (Other Strengths And Weaknesses)
Thank you for the tip. We will update the figure in the final version.
[1] A dataset of clinically generated visual questions and answers about radiology images, Scientific data, 2018
---
Rebuttal Comment 1.1:
Comment: The authors have addressed most of the concerns, and the supplement to the methods and experiments is clear. I am inclined to accept. | Summary: This paper proposes a training-free approach for emotion classification using Multimodal Large Language Models (MLLMs). They find that MLLMs (1) struggles to distinguish between semantically similar emotions, and (2) are overwhelmed by redundant visual information. To address these challenges, they propose a Coarse-to-Fine inference framework to refine emotion classification and a Focus-on-Emotion Viusal Augmentation approach to reduce visual redundancy. Experimental results on multiple benckmarks demonstrate the effectiveness of the proposed method.
## update after rebuttal
I read authors' response and other reviewers' comments. I appreciate the clarifications and new results, which have fully addressed my concerns. Therefore, I will raise my score.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: no
Experimental Designs Or Analyses: no
Supplementary Material: No attached supplementary material.
Relation To Broader Scientific Literature: This paper proposes a training-free approach for emotion perception using MLLMs, offering a new, well-performing solution. However, one of the core contributions of this paper, i.e., Focus-on-Emotion Visual Augmentation, as the authors claim, has been proposed in prior work ([**SPARSEVLM'25**](http://export.arxiv.org/pdf/2410.04417)), which focuses on more common VQA tasks. Although the paper achieves the best overall results on the emotion perception task, its contributions are less significant to the community under a broader background.
Essential References Not Discussed: no
Other Strengths And Weaknesses: strengths
1. This paper proposes a training-free approach for zero-shot emotion perception using MLLMs, providing an effective and efficient solution in this area.
2. Experimental results on multiple benchmarks and extensive diagnostic analysis demonstrate the effectiveness.
3. The writing is accessible, making the paper easy to follow.
weaknesses
1. The novelty of this paper is a concern. In particular, the proposed Focus-on-Emotion Visual Augmentation is methodologically identical to the adaptive sparsification in [**SPARSEVLM'25**](http://export.arxiv.org/pdf/2410.04417), which employs the same approach to drop irrelevant visual tokens.
2. The authors do not compare their method to those in the emotion-related field (e.g., [**SoV'24**](https://arxiv.org/pdf/2410.02244), which is also zero-shot), making the evaluation less convincing. At the least, the rationale behind this choice should be clarified.
Other Comments Or Suggestions: 1. I recommend the authors further improve the methodology and highlight the innovations.
2. More methods could be included for comparison to strengthen the comprehensiveness of the evaluation.
Questions For Authors: see weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer wn8T:
We sincerely appreciate your time and effort in reviewing our paper. We have provided further clarification on each of the issues you raised. We hope the detailed responses below fully address your concerns, and we would be grateful if you would consider updating your score.
**Q1: Different with SPARSEVLM & innovations** (Relation To Broader Scientific Literature & Other Strengths And Weaknesses & Other Comments Or Suggestions)
Thank you for raising this valuable question. Compared to general tasks, emotion recognition is often susceptible to interference from redundant visual information (noise), leading to inference biases. While SparseVLM accelerates inference for general tasks by retaining only the most crucial information in the visual modality, our method, in contrast, designs a specialized Focus-on-Emotion prompt, effectively identifying emotion-related visual redundancy through prompt-guided relevance. By discarding such redundant information, we can significantly enhance the accuracy of the model in emotion inference. Additionally, in emotion recognition tasks with a large number of categories, semantic similarities between emotions often lead to confusion. Therefore, we introduce Confidence-Guided Coarse-to-Fine Inference, enabling the model to handle simpler subtasks incrementally, thereby improving overall performance.
**Q2: Reason for not comparing with other emotion-related methods** (Other Strengths And Weaknesses)
To the best of our knowledge, we are the **first** to explore training-free optimization in general emotion recognition tasks. Therefore, we could not identify suitable emotion-related methods for a fair and meaningful comparison. Although the SoV [1] method mentioned in the paper is also training-free, it is specifically designed for facial emotion images and requires additional manual bounding box annotations. Moreover, since its code has not been released, we are unable to adapt it to our model architecture and to more general emotion recognition tasks in a short period of time. We will try to include a comparison with SoV in the final version and will continue to follow similar work in the future. Thank you for your valuable suggestion.
**Q3: More comparative experiment** (Other Comments Or Suggestions)
In the absence of suitable emotion-related baselines for direct comparison, we introduced a comparative experiment with PDrop [2] based on its official code, which aims to eliminate visual redundancy to optimize the model. The experimental results in the table show that PDrop yields a slight improvement over the zero-shot setting by mitigating visual redundancy, supporting the hypothesis that emotional visual redundancy may interfere with the judgment of the model. Moreover, the results highlight the superior effectiveness of our proposed method.
*Table: Comparison with PDrop on various emotion datasets.*
| Dataset | Emotion6 | EmoSet | WebEmo7 | WebEmo25 | Abstract | Average |
| :---------: | :-------: | :-------: | :--------: | :---------: | :-------: | :-------: |
| LLaVA | 48.32 | 52.77 | 25.56 | 15.71 | 27.86 | 34.04 |
| PDrop | 51.24 | 53.53 | 27.50 | 16.02 | 27.50 | 35.16 |
| SEPM (ours) | **54.21** | **56.04** | **42.39** | **18.26** | **29.29** | **40.04** |
**Thank you for your valuable advice. We are deeply grateful and appreciate your guidance.**
[1] Visual Prompting in LLMs for Enhancing Emotion Recognition, EMNLP, 2024
[2] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction, CVPR, 2025 | null | null | null | null | null | null |
ExLM: Rethinking the Impact of $\texttt{[MASK]}$ Tokens in Masked Language Models | Accept (poster) | Summary: This paper investigates the role of mask tokens in Masked Language Models (MLMs). The authors first provide an empirical examination of the effect of the mask token through two perspectives: corrupted tokens and unreal tokens. Additionally, the authors propose a new algorithm, EXLM, to further enhance performance. The experimental evaluation demonstrates the effectiveness of the proposed method.
Claims And Evidence: 1. The authors claim that both unreal tokens and corrupted tokens arising from the mask token can affect performance. While the impact of corrupted semantics has been studied in previous literature (Line 65), the effect of unreal tokens does not appear to be significant for model performance (Fig. 3). This, in turn, underscores the importance of prior work. Given this, what is the authors' main contribution in analyzing unreal tokens? Can the authors elaborate on this?
2. The connection between Section 3 and Section 4 is weak. The proposed method, ExLM, does not seem to be motivated by the analysis given in Section 3. What is the motivation behind the proposed method, ExLM? At the end of Section 3, the authors appear to focus on studying the optimal mask ratio, but this aspect is not discussed in Section 4.
Methods And Evaluation Criteria: 1. The baseline approaches seem too weak as they were proposed a few years ago.
2. The proposed method, ExLM, incorporates 2D RoPE, which further enhances model performance, as shown in Table 2. Therefore, the performance gain is not solely attributable to ExLM, making it difficult to assess the true effectiveness of the approach.
Theoretical Claims: NA
Experimental Designs Or Analyses: No Question
Supplementary Material: I didn't review the supplementary material.
Relation To Broader Scientific Literature: Please see *Claims And Evidence*
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: 1. As indicated in Section 3.3 (Core Impact of Corrupted Semantics: Multimodality), Line 212, the corrupted context may correspond to multiple possible semantics. Can authors explain how EXLM addresses this issue?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the insightful suggestions from Reviewer j1MR. In the following sections, we will address all your concerns. These discussions will also be incorporated into the final camera-ready version of the paper. Any further comments are welcome!
---
**Q1: What is the authors' main contribution in analyzing unreal tokens?**
Thank you for the question. The repeated MLM experiments are primarily designed to analyze the key factors that influence the performance of MLM models and the underlying reasons. Toward this goal, this section presents the following two main contributions:
1. We demonstrate that the issue of *unreal tokens*, which has received significant attention in previous works, is not the primary factor affecting MLM performance.
2. We show that *corrupted semantics* and the resulting *multimodality* phenomenon are more critical in influencing the performance of MLMs.
In summary, this analytical section offers a new perspective for understanding MLMs by highlighting the previously overlooked importance of corrupted semantics. It also suggests that future developments of MLMs should focus more on addressing the problem of multimodality.
---
**Q2: The connection between Section 3 and Section 4 is weak.**
Thank you for the suggestion. We have revised Sections 3 and 4 to better highlight their connection. These changes will be included in the final camera-ready version. Specifically:
1. The last paragraph of Section 3.3 has been updated to emphasize *multimodality* as a key factor affecting MLM performance. An effective model should thus be able to *mitigate the impact of semantic multimodality*.
2. Section 4.1 has been reorganized to align with Section 3’s analysis. To address multimodality, ExLM introduces two mechanisms:
- **Intra-token Multimodality**: Each missing token may have a diverse set of plausible candidates. Thus, we propose *States Expansion* to build a larger semantic space, enabling richer token predictions.
- **Inter-token Multimodality**: The meaning of a token is intricately dependent on the semantics of its surrounding tokens. Thus, we introduce a *Dependency Capture* mechanism using a transition matrix to model semantic dependencies across tokens.
---
**Q3: How does ExLM model multiple possible semantics?**
We introduce a *state expansion* mechanism in ExLM, where each [MASK] token is associated with multiple expanded states, each representing a different possible semantic choice for that token.
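A toy sketch of the states-expansion step described above (ignoring the 2D positional encoding and the transition matrix, which the full model also uses; the `[MASK]_i` naming is purely illustrative):

```python
def expand_mask_states(tokens, k=3, mask="[MASK]"):
    """Replace each [MASK] with k expanded mask states so the model can
    represent multiple semantic candidates per missing token.

    This is only a sketch of the states-expansion idea; in the actual
    model the expanded states are hidden states, not literal tokens.
    """
    out = []
    for t in tokens:
        if t == mask:
            # one expanded state per possible semantic choice
            out.extend(f"{mask}_{i}" for i in range(k))
        else:
            out.append(t)
    return out

print(expand_mask_states(["The", "[MASK]", "sat"], k=2))
# ['The', '[MASK]_0', '[MASK]_1', 'sat']
```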
---
**Q4: The baseline approaches seem too weak as they were proposed a few years ago.**
Thank you for the valuable suggestion. We have added comparisons with a more recent baseline model, **3ML_{self}** [Liao et al., 2022], in Table 3. The results show that our ExLM model outperforms this stronger baseline, as detailed below:
| | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | RTE | MRPC | STS-B | MEAN |
|-|-|-|--|-|-|-|-|-|-|
| TUPE [Ke et al., 2021] | 86.2/86.2 | 91.3 | 92.2 | 93.3 | 63.6 | 73.6 | **89.9** | 89.2 | 84.9 |
| 3ML_{self} [Liao et al., 2022] | 84.8/84.9 | 91.1 | 91.4 | 92.9 | 61.4 | **81.2** | 89.2 | 90.1 | 85.2 |
| ExLM | **86.9/86.7** | **92.0** | **93.1** | **93.9** | **64.6** | 78.8 | 89.6 | **90.5** | **86.2** |
In addition, we further evaluated the ExLM model on the **SuperGLUE benchmark** [Wang et al., 2019]. The following table shows that ExLM also achieves significant improvements over several strong baselines, further confirming the effectiveness of our approach:
| Model | BoolQ (Acc.) | CB (Acc.) | COPA (Acc.) | MultiRC (F1) |
|-|-|-|-|-|
| BERT | 74.4 | 83.9 | 63.0 | 68.1 |
| Token Drop [Hou et al., 2022] | 73.0 | 83.9 | 64.0 | 67.7 |
| SCTD [Zhong et al., 2023] | 73.8 | 87.5 | 68.0 | 68.9 |
| ExLM | **76.7** | **88.0** | **69.1** | **71.3** |
---
**Q5: The proposed method, ExLM, incorporates 2D RoPE, which further enhances model performance, making it difficult to assess the true effectiveness of the approach.**
Thank you for pointing this out. In our experiments, all compared MLM baselines, including Vanilla MLM and Vanilla MLM++, adopt the same backbone architecture as ExLM, which uses RoPE as the default positional encoding. Therefore, the performance comparisons are made on a fair basis and can indeed assess the true effectiveness of the proposed method.
**References:**
[Hou et al., 2022] Hou, Le, et al. "Token Dropping for Efficient BERT Pretraining." ACL 2022.
[Zhong et al., 2023] Zhong, Qihuang, et al. "Revisiting Token Dropping Strategy in Efficient BERT Pretraining." ACL 2023.
[Wang et al., 2019] Wang, Alex, et al. "SuperGLUE: A stickier benchmark for general-purpose language understanding systems." NeurIPS 2019.
[Liao et al., 2022] Liao, Baohao, et al. "Mask More and Mask Later: Efficient Pre-training of Masked Language Models by Disentangling the [MASK] Token." EMNLP 2022.
[Ke et al., 2021] Ke, Guolin, Di He, and Tie-Yan Liu. "Rethinking Positional Encoding in Language Pre-training." ICLR 2021. | Summary: This paper studies the semantic corruption issue in masked language modeling (MLM). To start the research, the authors design an experiment (repeated MLM) to show the relationship and significance of the corrupted semantics caused by masking. As a result, ExLM is proposed as a solution for the problem. In this LM, a couple of masked tokens are distributed to each masked position. The hidden states are then aligned using a transition matrix, then computing the final losses.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: 1. The first part of the experiment show great performance gain on learning the language representation.
2. However, the second part of the experiments is evaluated on a selection of GLUE and SQuAD2 dev sets. In my experience, these dev sets are potentially biased relative to the official test sets, and the results are unstable. Considering that the performance gains shown in Table 3 are not very pronounced (e.g. 64.3 to 64.6 on CoLA), **I need to point out the potential risk that this part of the evaluation can be problematic.**
3. The repeated MLM results are reported on MNLI, a frequently used dataset for NLU. Even though the results are not pronounced, I trust them. The problem is that experimenting only on MNLI is not sufficient to support the claims. **Results on at least one more dataset (e.g. QQP, RTE) are needed to fully justify the claims** (e.g. which of corrupted semantics and unreal tokens matters).
Theoretical Claims: I did not check the proof in Appendix.
Experimental Designs Or Analyses: The experiments are run across a number of random seeds. They are fair.
Supplementary Material: No supplementary material provided.
Relation To Broader Scientific Literature: The method proposed in the paper can be useful for training MLMs.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Weakness:
1. The paper states that the ExLM method is efficient in modeling semantics. I agree with that, but I do care about **the computational efficiency of ExLM compared to MLM and other MLM variants, e.g. time cost and memory cost. This point is very important for the usage of the method, and it seems not to be discussed in the paper.** If it is already provided somewhere, please remind me.
2. The details of the repeated MLM experiments are not clear to me. The authors mention that they train a number of MLMs with different repetition times k and mask ratios p. The results in Figures 3-5 are on the MNLI dataset. I am not sure whether they pre-train the MLMs (e.g. on Wiki) and then fine-tune on MNLI, or just fine-tune on MNLI. In my opinion, these two approaches can lead to very different results in the analysis.
Other Comments Or Suggestions: Please see above.
Questions For Authors: Please see above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We express our gratitude to Reviewer 3GXq for the suggestions. In the following, we address all your concerns regarding the evaluation of NLU tasks, additional experiments on more tasks, the efficiency analysis of ExLM, and the details of repeated MLM experiments. These discussions will also be incorporated into the final camera-ready version of the paper. We hope our responses help clarify the paper, and we welcome any further comments or suggestions!
---
**Q1: These dev sets are potentially biased to the official test sets, and the results are unstable.**
**A:** Thank you for your insightful comment. We have evaluated the stability of the ExLM model on the GLUE benchmark by reporting the standard deviation of performance across multiple runs. The results, summarized in the table below, demonstrate that ExLM exhibits good stability overall:
**Standard Deviation on the GLUE Benchmark:**
| Task | ExLM Performance |
|--------|------------------|
| MNLI-m | 86.9 ± 0.14 |
| QQP | 92.0 ± 0.09 |
| QNLI | 93.1 ± 0.43 |
| SST-2 | 93.9 ± 0.50 |
| CoLA | 64.6 ± 0.81 |
| RTE | 78.8 ± 1.80 |
| MRPC | 89.6 ± 1.00 |
| STS-B | 90.5 ± 0.72 |
| MEAN | 86.2 |
Furthermore, to further verify the effectiveness of ExLM, we also evaluated its performance on the SuperGLUE benchmark [Wang et al., 2019]. As shown in the table below, ExLM significantly outperforms several strong baseline models, further supporting the validity of our proposed approach:
| Model | BoolQ (Acc.) | CB (Acc.) | COPA (Acc.) | MultiRC (F1) |
|-------------------------------|--------------|-----------|-------------|---------------|
| BERT [Devlin et al., 2019] | 74.4 | 83.9 | 63.0 | 68.1 |
| Token Drop [Hou et al., 2022] | 73.0 | 83.9 | 64.0 | 67.7 |
| SCTD [Zhong et al., 2023] | 73.8 | 87.5 | 68.0 | 68.9 |
| ExLM | **76.7** | **88.0** | **69.1** | **71.3** |
---
**Q2: At least one more dataset result (e.g. QQP, RTE) is needed to fully justify the claims.**
**A:** Thank you for the suggestion. In fact, the repeated MLM results on QQP and RTE have already been included in Appendix F. As stated in Section 3.2 of the main text: *“We also provide the results of this experiment on more tasks in Appendix F.”*
---
**Q3: The computation efficiency of ExLM compared to MLM and other MLM variants, e.g., time cost, memory cost.**
**A:** Thank you for your comment. As mentioned in Section 5.3 of the main text, *“Efficiency and entropy analysis of EXLM are provided in Appendix Q”*. Specifically, we present a detailed discussion of ExLM's training time cost in Appendix Q.2, including a comparison with MLM models. Moreover, memory cost analysis is provided in Appendix I.4, which is also referenced in Section 4.3 of the main text: *“The efficiency analysis is provided in Appendix I.”*
---
**Q4: The details of repeated MLM experiments are not clear. I am not sure whether they pre-train the MLM (e.g., on Wiki) and then fine-tune on MNLI, or just fine-tune on MNLI.**
**A:** Thank you for your comment. In each group of repeated MLM experiments, we pre-trained separate MLM models using different values of *k* and *p*, and then fine-tuned and evaluated them on downstream tasks using the same settings. As stated in Appendix B.1 (Pre-training Configuration): *“In the Repeated MLM experiment, we train a series of MLMs with different p and k parameters.”*
---
**References:**
[Devlin et al., 2019] Devlin, Jacob, et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." NAACL 2019.
[Hou et al., 2022] Hou, Le, et al. "Token Dropping for Efficient BERT Pretraining." ACL 2022.
[Zhong et al., 2023] Zhong, Qihuang, et al. "Revisiting Token Dropping Strategy in Efficient BERT Pretraining." ACL 2023.
[Wang et al., 2019] Wang, Alex, et al. "SuperGLUE: A stickier benchmark for general-purpose language understanding systems." NeurIPS 2019.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' feedback and additional numbers. I update my score for that accordingly.
However, I still keep my question about the potential gain of the method, which is mostly evaluated on the GLUE dev sets. There are too many papers doing the same thing and achieving similar performance gains. On the other hand, the performance gain is still not pronounced to me, for example, when compared to simply training longer.
I am not sure whether the paper reaches the bar of ICML.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful feedback and valuable suggestions.
We would like to highlight that, in the original version of the paper, we have compared ExLM and the standard MLM under both SMILES modeling and natural language modeling settings. Furthermore, we have evaluated ExLM across two distinct types of benchmarks—text understanding (e.g., GLUE) and molecular property prediction (e.g., MoleculeNet). In these tasks, ExLM consistently outperforms vanilla MLM, demonstrating stable and significant improvements, which strongly supports the effectiveness of ExLM.
Looking forward, we plan to extend ExLM to protein sequence modeling. ExLM’s enhanced ability to capture long-range semantic dependencies (such as co-evolutionary signals) can be particularly beneficial in protein-related tasks. We also believe that this line of work—enhancing semantic dependency modeling—holds great potential in the broader AI for Science domain.
Once again, we truly appreciate your comments and suggestions. We hope you will continue to follow our work, and we would be happy to discuss further if you have any other questions. | Summary: This paper gives a new way of utilizing [MASK] tokens in masked language models. It first performs an analysis of the semantic aspects of the [MASK] token and then proposes ExLM wherein multiple [MASK] tokens are introduced during pre-training. The authors then propose to utilize a learnt transition to get multiple semantic alignments which makes the model semantically richer and performs better than the previously proposed model on a suite of tasks.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Not rigorously checked.
Experimental Designs Or Analyses: Yes. Experimental designs are sound.
Supplementary Material: I reviewed the proposed dynamic programming alignment strategy.
Relation To Broader Scientific Literature: The paper is of broad interest in understanding and building new language models.
Essential References Not Discussed: @inproceedings{namazifar2021warped,
title={Warped language models for noise robust language understanding},
author={Namazifar, Mahdi and Tur, Gokhan and Hakkani-T{\"u}r, Dilek},
booktitle={2021 IEEE spoken language technology workshop (SLT)},
pages={981--988},
year={2021},
organization={IEEE}
}
The above citation proposed different noising techniques for ASR error robustness. It is somewhat related to this work.
Other Strengths And Weaknesses: Strength:
The ExLM formulation is very interesting and exciting to me. The semantic richness acquired by the model as a result of this formulation can be very useful.
Weakness:
The paper can benefit from a more detailed explanation of the alignment algorithm. I feel like the explanation should be in the main text rather than in the appendix.
Also, the flow of ideas from Section 3 to Section 4 could be handled much better.
Other Comments Or Suggestions: 1. It is unclear how the discussion on the impact of [MASK] (section 3) is essential for ExLM. The findings in section 3 seem fairly obvious. The fact that semantic corruption is the key factor for the learning process is hardly surprising (figure 4).
2. The relation between section 3 and section 4 is not clear.
3. The authors need to explain the alignment process, i.e., the forward-backward algorithm, in the main text.
Questions For Authors: 1. "Corrupted semantics" is obviously introduced noise, in that masked language modeling (MLM) is a kind of denoising process. The authors claim that it can "...negatively affect the model's ability to learn accurate semantic representation...". It is not clear why this claim should be non-obvious. Having large semantic corruption (large p^k) will obviously lead to performance degradation; therefore, its value is chosen carefully. But how is it a problem?
2. In appendix H, why is the index of the last state chosen for the final loss? What if the last state is never reached? Is there a null transition? This needs more details.
3. In the ablation, how is ExLM modeled without the transition matrix? Is it like CTC? Needs more details.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions. In the following, we provide detailed responses to all your concerns regarding the missing reference, model details, writing organization, and implementation issues. These discussions will also be incorporated into the final camera-ready version of our paper. We sincerely welcome any further feedback you may have.
---
**Q1: Missing reference about the noising techniques for ASR error robustness**
**A:** Thank you for your suggestion. We have now added the missing citation to the draft of our paper, and we will also ensure that it is properly included in the final camera-ready version.
---
**Q2: The explanation of algorithm details should be in the main text**
**A:** Thank you for the feedback. We have reorganized the structure of the paper accordingly. In earlier versions, some important methodological details were placed in the appendix due to space limitations. However, since the camera-ready version allows for an additional page, we have moved more details about ExLM—such as the dynamic programming procedure and the forward-backward algorithm—into the main body of the paper.
---
**Q3: The relation between Section 3 and Section 4 is not clear**
**A:** Thank you for the insightful comment. We have revised the writing of Section 4.1 to more clearly highlight the connection between Section 3 and Section 4. Specifically, the analysis in Section 3 demonstrates that *multimodality* is a key factor impacting the performance of MLMs. To enhance ExLM’s capacity for handling multimodality, we propose two mechanisms tailored to address two distinct levels of multimodality:
1. **Intra-token Multimodality**: The possible candidates for each masked token can be highly diverse. To address this, we introduce a *States Expansion* mechanism to construct a larger semantic space, allowing the model to learn richer and more diverse semantic information.
2. **Inter-token Multimodality**: The meaning of a token is intricately related to the meanings of surrounding tokens. To capture this, we incorporate a *Dependency Capture* mechanism, where a transition matrix is used to explicitly model semantic dependencies between different states.
This design clearly establishes the relevance of the findings in Section 3 to the architectural choices in Section 4. These improvements have been added to the camera-ready version.
---
**Q4: The impact of corrupted semantics in MLM is obvious and well-managed, so it’s unclear why it’s framed as a significant issue**
**A:** Thank you for raising this important point. We agree that "corrupted semantics" are intentionally introduced as a form of noise in the MLM denoising process, and such noise is a legitimate component of the training strategy. Indeed, our goal is not to eliminate this noise—on the contrary, we explicitly mention in the paper that a certain level of corrupted semantics can benefit training.
However, **the central issue we aim to address is the unintended side effects this noise may have on model behavior.** Specifically, the purpose of the denoising process is to train the model to recover clean data from noisy inputs. While noise facilitates learning, it also modifies the model's input distribution, which may interfere with the model’s ability to learn accurate representations.
Therefore, a well-designed denoising process should maintain a reasonable noise level while minimizing the adverse effects of noise on model behavior. Our study identifies one such undesirable effect—**multimodality**—arising from corrupted semantics. We aim to mitigate the impact of multimodality without eliminating the beneficial aspects of noise, which is the motivation behind our focus on corrupted semantics in this work.
---
**Q5: Why is the index of the last state chosen for the final loss? What if the last state is never reached?**
**A:** Thank you for the question. The reason we choose the index of the last state (i.e., `[EOS]`) for the final loss is that we enforce a structural constraint: every valid decoding path in the DAG must terminate at the last state. This design eliminates the need to enumerate all possible ending states when summing the probabilities of valid paths, thereby reducing computational complexity.
Moreover, the DAG in ExLM is constructed in a way that guarantees the reachability of the last state from all other nodes. Thus, the last state is always reachable, and the validity of the loss computation is ensured. We will include these details in the camera-ready version.
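To make the path summation concrete, here is a minimal illustrative sketch (not the actual ExLM implementation; the function names, array shapes, and start-at-state-0 convention are assumptions for the example) of a dynamic-programming forward pass in which every valid path must terminate at the last state:

```python
import numpy as np

def _logsumexp(a, axis=0):
    """Numerically stable log-sum-exp that tolerates -inf entries."""
    m = np.max(a, axis=axis, keepdims=True)
    m = np.where(np.isfinite(m), m, 0.0)  # guard slices that are all -inf
    with np.errstate(divide="ignore"):
        return m.squeeze(axis) + np.log(np.exp(a - m).sum(axis=axis))

def forward_logprob(emit, trans):
    """Forward pass over a state lattice where every valid path starts at
    state 0 and must terminate at the last state (the structural
    constraint described above).

    emit:  (T, S) log-probability of target position t under state s
    trans: (S, S) log transition scores; entries below the diagonal are
           -inf, so the graph is acyclic and the last state stays reachable
    """
    T, S = emit.shape
    alpha = np.full(S, -np.inf)
    alpha[0] = emit[0, 0]                 # all paths start at state 0
    for t in range(1, T):
        # alpha[i] + trans[i, j]: extend every partial path into state j
        alpha = _logsumexp(alpha[:, None] + trans, axis=0) + emit[t]
    return alpha[-1]                      # only paths ending at the last state count
```

Because all paths are forced to end at the last state, the loss is read off a single entry of `alpha` instead of being summed over all possible ending states.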
---
**Q6: How is ExLM modeled without the transition matrix? Is it like CTC?**
**A:** Yes, your understanding is exactly correct. When the transition matrix is not used, ExLM is trained in a manner similar to CTC (Connectionist Temporal Classification), where the model considers all possible alignments between the expanded states and the target tokens. More implementation details on this setup will be added to the camera-ready version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications and modifications. I will keep my score unchanged to 4 and suggest acceptance of the paper for the principled problem formulation and analysis.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for reading our rebuttal and for your encouraging words! We sincerely appreciate your insightful comments—they have been truly inspiring and instrumental in improving our paper. Your response means a great deal to us, and we warmly welcome any further communication.
Authors | Summary: This work presents a deeper analysis into the effectiveness of mask token in MLM pre-training.
The authors argue that the conventional use of [MASK] tokens can lead to a "corrupted semantics problem" where the masked context may become ambiguous and lead to multiple interpretations.
To highlight this issue, the authors conduct a series of analytical studies, namely Repeated MLM, showing that the corrupted semantics problem is a more significant factor than the unreal token problem in affecting the performance of MLMs on downstream tasks.
To address this challenge, the paper introduces a novel pre-trained model, ExLM.
The key idea of ExLM is the expansion of [MASK] tokens in the input context into multiple hidden states, which allows the model to represent a larger semantic space.
By doing so, ExLM aims to increase the context capacity and capture richer semantic information, thereby reducing the ambiguity introduced by the masked tokens and reducing the semantic multimodality in token prediction.
Experiments on various NLU tasks demonstrate the effectiveness of ExLM.
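As a rough sketch of my understanding of the states-expansion step (illustrative only, not the authors' code; `expand_masks`, `k`, and `mask_id` are hypothetical names):

```python
def expand_masks(tokens, k, mask_id):
    """Replace each [MASK] token with k placeholder states, so the model
    has k hidden-state slots per masked position in which to represent a
    richer, possibly multimodal, semantic space.
    """
    out = []
    for t in tokens:
        # ordinary tokens pass through; each mask expands into k states
        out.extend([mask_id] * k if t == mask_id else [t])
    return out
```

For example, with `k = 2` and `mask_id = 9`, the sequence `[1, 9, 2, 9]` would expand to `[1, 9, 9, 2, 9, 9]`.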
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: The soundness of the evaluation experiments raises some concerns.
First, the authors do not explain why they choose SMILES as an evaluation task.
It seems that the patterns in molecular data could be quite different from those in textual data.
There should be some explanation of why the authors choose this task and why the proposed method works on it (the preliminary analyses are mainly conducted on MNLI, which is an NLU task).
Second, I think there should be more results on NLU tasks to demonstrate the effectiveness of ExLM, because some of the performance gains shown in Table 3 are not significant enough.
Supplementary Material: No.
Relation To Broader Scientific Literature: This work contributes to the prior analysis about the effectiveness of mask tokens.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The main weakness lies in the experiments.
Other Comments Or Suggestions: No.
Questions For Authors: Can you report the std of the results in Table 1 and Table 3? Some performance gains in these tables are quite marginal, so the std could help clarify the significance of the results.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive feedback. We sincerely appreciate your valuable suggestions. Below, we address all your concerns regarding the SMILES tasks, the standard deviation of model performance, and additional NLU tasks. The relevant results will be incorporated into the final camera-ready version of our paper. We hope our responses help clarify the work, and we welcome any further comments.
---
**Q1: Why do they take SMILES as the evaluation task?**
**A:** Thank you for the question. The reason for selecting SMILES as the evaluation task is that masked language modeling (MLM) on SMILES also encounters issues similar to those observed in natural language modeling. We conducted Repeated MLM experiments on the BACE task (a SMILES classification task), and the results are shown in the table below. These results exhibit a trend consistent with Repeated MLM experiments on natural language data, indicating that semantic corruption significantly affects SMILES modeling:
| k \ P | 2.25% | 15% | 38.7% | 62.2% | 78.9% |
| ----- | ----- | ---- | ----- | ----- | ----- |
| 1 | 57.6 | 73.5 | 70.4 | 68.6 | 63.0 |
| 2 | | 69.5 | 72.7 | 69.6 | 65.3 |
| 4 | | | 70.1 | 72.0 | 67.2 |
| 8 | | | | 69.9 | 70.7 |
These findings can be attributed to the presence of strong semantic dependencies in SMILES representations. For instance, forming valid molecular functional groups often requires precise coordination among multiple atoms, which conventional MLM fails to capture due to its inability to model such semantic associations effectively.
---
**Q2: The standard deviation of the results in Table 1 and Table 3**
**A:** Thank you for the helpful suggestion. We have computed the standard deviations of ExLM’s performance on both the GLUE and MoleculeNet benchmarks. The results are summarized in the tables below. Overall, ExLM demonstrates good stability across these benchmarks:
*Standard deviation on MoleculeNet benchmark:*
| Task | ExLM Performance |
| ------- | ---------------- |
| BACE | 79.6±0.8 |
| BBBP | 72.8±1.2 |
| Tox21 | 78.2±0.1 |
| SIDER | 64.5±0.7 |
| MUV | 78.8±0.6 |
| ClinTox | 91.6±1.8 |
| ToxCast | 66.9±0.3 |
| Mean | 76.1 |
*Standard deviation on GLUE benchmark:*
| Task | ExLM Performance |
| ------ | ---------------- |
| MNLI-m | 86.9±0.14 |
| QQP | 92.0±0.09 |
| QNLI | 93.1±0.43 |
| SST-2 | 93.9±0.5 |
| CoLA | 64.6±0.81 |
| RTE | 78.8±1.8 |
| MRPC | 89.6±1.0 |
| STS-B | 90.5±0.72 |
| MEAN | 86.2 |
---
**Q3: There should be more results on NLU tasks to demonstrate the effectiveness of ExLM**
**A:** Thank you for the excellent suggestion. We further evaluated the performance of ExLM on the SuperGLUE benchmark [Wang et al., 2019]. As shown in the table below, ExLM outperforms several baseline models by a notable margin, which demonstrates the effectiveness of our method:
| | BoolQ (Acc.) | CB (Acc.) | COPA (Acc.) | MultiRC (F1) |
| ---------- | ------------ | --------- | ----------- | ------------- |
| BERT [Devlin et al., 2019] | 74.4 | 83.9 | 63.0 | 68.1 |
| Token Drop [Hou et al., 2022] | 73.0 | 83.9 | 64.0 | 67.7 |
| SCTD [Zhong et al., 2023] | 73.8 | 87.5 | 68.0 | 68.9 |
| ExLM | **76.7** | **88.0** | **69.1** | **71.3** |
We hope these additional results provide more clarity and demonstrate the strong performance of our proposed model. Please let us know if further information is needed!
**References:**
[Devlin et al., 2019] Devlin, Jacob, et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." NAACL 2019.
[Hou et al., 2022] Hou, Le, et al. "Token Dropping for Efficient BERT Pretraining." ACL 2022.
[Zhong et al., 2023] Zhong, Qihuang, et al. "Revisiting Token Dropping Strategy in Efficient BERT Pretraining." ACL 2023.
[Wang et al., 2019] Wang, Alex, et al. "SuperGLUE: A stickier benchmark for general-purpose language understanding systems." NeurIPS 2019.
CACTI: Leveraging Copy Masking and Contextual Information to Improve Tabular Data Imputation | Accept (spotlight poster) | Summary: The authors propose a new method to impute missing data in tabular data sets. This method, named CACTI, incorporates three components: a masked autoencoding approach, a median truncated copy-masking training strategy, and the use of semantic relations between features.
The proposed method is evaluated across 10 real-world data sets with different missingness scenarios (MCAR, MAR, and MNAR) and different metrics related to pointwise imputation, distributional imputation, and predictive performance.
Claims And Evidence: The method is well described and the experimental protocol is well explained.
Regarding the way masks are generated: since they are permuted across observations, I agree that the dependence between missingness components is preserved. However, if masks are MAR, that is, if the probability of missingness depends on the observed values, permuting masks will break this dependence. Thus, the proposed strategy seems valid only in the MCAR setting, with possible dependencies between missingness components, and is not valid for every type of missingness. Please correct me if this is not right.
Methods And Evaluation Criteria: The experimental protocol is sound. In Table 1, it would be interesting to add CACTI without the context awareness. Such a method is more comparable to classic methods such as MissForest, and it would be interesting to see how they compare to each other (as I understand, Table 2 does this, but only on a few data sets).
It could also be interesting to test the robustness to mislabeled input variables: what happens in terms of predictive performance when the column descriptions are permuted?
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental protocol is sound. Many baselines from classic imputation techniques to deep learning strategies are compared.
Supplementary Material: I did not review the supplementary materials
Relation To Broader Scientific Literature: The literature is well cited with respect to missing data imputation.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: This paper propose to combine different existing ingredient to impute missing data. While there is no theory, and while the techniques are not totally new, the empirical protocol is exhaustive, testing for different missing data mechanisms, different datasets, and comparing several baselines. The empirical evaluation is thorough and is interesting on its own, in my opinion.
Other Comments Or Suggestions: Typos :
- page 1, second column : ‘can arise due due to’
- page 2 line 81, where $k \in [K]$ a verb is missing here
- page 2, line 66 : ‘a direct way to to incorporate’
- page 3 line 134, ‘contextual awareness to the to improve’
- page 3 line 158 since $m=1$ corresponds to an observed entry, I would say that feature $i$ is observed any time $j$ is observed
- page 5, line 273, so can we simulate
- page 7 line 336 just using the the feature context
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the helpful feedback and questions.
## Comments Addressed
We've added CMAE (CACTI without context) as an additional benchmark in the main results table (qrHN rebuttal Table. R1). We’ve also added a comparison between CACTI, CMAE to quantify the statistical significance of the improvements driven by context awareness (qrHN rebuttal Table. R2). We hope these results further clarify that context yields statistically significant (p<0.05) improvement over masked learning alone and is a useful source of inductive bias to improve imputation. We really appreciate the reviewer's attention to detail and highlighting the typos. They have now been addressed, and we apologize for the oversight.
Next, our theoretical results in Appendix G show that copy masking can be *more* effective than random masking under MNAR. For MAR, consider a structured MAR setting [1] where the missing variables are dependent on some observed variable but also have additional dependencies, e.g., feature i is missing if feature j is missing. Such settings are quite common in questionnaire and other datasets. Under this setting, copy masking is still able to maintain the *structured* missingness between the variables whereas random masking completely diverges from the missingness process. The crux of our argument in Appendix G is that copy masking is better suited to modeling MAR and MNAR than random masking. For example, unlike random masking, applying copy masking to MAR results in masking only the missing features. This encourages the MAE to learn a conditional model that maximizes the probability of the masked feature given the observed features.
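As a minimal sketch of the copy-masking idea itself (illustrative only, not the actual CACTI implementation; the function name, donor pairing via a random permutation, and the zero-fill convention are assumptions for the example):

```python
import numpy as np

def copy_mask(X, observed, rng):
    """Each row borrows the missingness pattern of a randomly paired
    donor row, so structured dependencies between missing features
    (e.g., feature i missing whenever feature j is missing) carry over
    into the training mask.

    X:        (n, d) data matrix with missing entries zero-filled
    observed: (n, d) boolean, True where an entry was actually observed
    Returns the additionally masked input and the reconstruction targets
    (entries that were observed but are now hidden).
    """
    n = X.shape[0]
    donors = rng.permutation(n)            # pair each row with a donor row
    keep = observed & observed[donors]     # hide whatever the donor is missing
    targets = observed & ~keep             # observed-but-hidden -> loss targets
    return X * keep, targets
```

Under random masking, by contrast, the hidden entries would be drawn uniformly and would not reflect the dataset's empirical missingness process.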
## Contributions
While MAEs with random masking (ReMasker) and MLPs with naive copy masking (AutoComplete) have both been used to tackle imputation in prior works, we believe our work makes novel, concrete contributions.
1. MT-CM, an upgrade to copy masking, is a novel, key contribution. As highlighted by our results in Table A7 the application of naive copy masking to transformer-based MAE architecture leads to suboptimal performance and reduced learning efficiency. This is a significant gap in the field, as transformers are arguably the most popular architecture in current literature. MT-CM enables the application of copy masking effectively to such architectures and creates an upper bound on the worst-case scenario in batch-based training, i.e. the proportion of null tokens in any batch is upper bounded by 50%. Our MT-CM strategy unlocks the effective use of copy masking for any transformer-based masked learning model and can have applications beyond just tabular imputation.
2. CACTI is the first approach (to the best of our knowledge) to effectively leverage context as a useful inductive bias and show statistically significant improvements in various imputation settings. Our results highlight the importance of investigating new inductive biases as a promising direction for future tabular imputation research, particularly in domains with MNAR missingness.
3. Our results establish CACTI as a strong baseline for evaluating and developing future approaches in tabular imputation.
4. Although our theoretical section (Appendix G) is not highlighted in the main text (due to space restrictions), we believe we do provide a new empirical risk framework for evaluating MAE training without fully observed data (previous works don’t consider the probabilistic nature of the missingness process and only view the objective as representation loss minimization), provide the first formal motivation for copy masking and show how random masking is suboptimal compared to copy masking in non-MCAR settings. If given the chance we would like to introduce this theoretical framework in the main text in the final draft.
We hope this addresses the reviewer's concerns and that they find this paper to be a valuable contribution to the current literature. We would appreciate their reconsideration of their overall recommendation. As always, we're happy to address any additional questions or concerns!
[1] Jackson, J., et. al. A complete characterisation of structured missingness, 2023. | Summary: This paper addresses the missing data imputation problem by using a transformer-based architecture that leverages the missing patterns and textual information about features as inductive biases to improve imputation accuracy. Specifically, the paper proposes a median truncated copy masking training strategy that captures the empirical missing patterns of data during training. Then, the method exploits the feature name information by extracting language model embedding of the feature name and taking it as the input of the imputation model to capture the context information. The experiments show that these two inductive biases improve the accuracy of imputation.
Claims And Evidence: The claims in the paper are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the target problem.
Theoretical Claims: The overall proofs in the supplementary are sound.
Experimental Designs Or Analyses: The experimental design is comprehensive, with different data sets and methods included. Hyperparameter and ablation studies are also included.
Supplementary Material: I have reviewed the supplementary material for the theoretical proofs and the experiments.
Relation To Broader Scientific Literature: The study is focused on missing data Imputation with inductive biases of missing patterns and feature name description which is applicable to any domain that has MNAR data and whose feature description conveys important information, for example, healthcare.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
The writing and organization of the paper are clear. The experiments are also comprehensive and include many data sets and methods. Ablation studies and the effect of hyperparameters are also included. The paper also theoretically justifies why the proposed masking strategy is better than naïve random masking.
Weakness
The results tables could contain standard deviations to show statistical significance.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and suggestions. We would like to highlight that all the individual per dataset/missingness condition results for all analyses performed in this paper were previously reported with 95% confidence intervals in the Appendix. We choose to exclude standard errors (SEs) in the main text to ensure visual parsimony and due to space constraints. Please see the silhouette for the main benchmarking results table with SEs reported in Table. R1 which we will include in the final version. Finally, we have a new set of results in Table. R2, which shows that CACTI demonstrates, via a paired t-test, statistically significant (p < 0.05) improvement in R2 and RMSE over all missingness conditions when compared to the next best method (ReMasker). This relative ranking improvement holds even after we exclude the 4 datasets used for the sensitivity and ablation analysis (Table. R3). This provides further evidence to show that leveraging inductive biases from dataset-specific missingness patterns and context information can help improve imputation accuracy. Finally, if the reviewer believes we have sufficiently addressed their concerns and finds this paper to be a valuable contribution to the current literature, we would appreciate their consideration in increasing their score. As always, we're happy to address any additional questions or concerns!
**[Please note that we’re only reporting the top few methods since we’re limited to 5000 characters per rebuttals, final version of the paper will contain all methods when appropriate]**
**Table. R1**: Main benchmarking results with standard errors in parentheses and two sets of results indicate train\|test.
| Method | |R2 | | |RMSE | |
|--------|--------|--------|--------|--------|--------|--------|
| | MCAR | MAR | MNAR | MCAR | MAR | MNAR |
| CACTI | 0.455\|0.461 (0.002)\|(0.003) | 0.469\|0.467 (0.010)\|(0.011) | 0.458\|0.456 (0.003)\|(0.004) | 0.663\|0.640 (0.008)\|(0.005) | 0.675\|0.694 (0.016)\|(0.016) | 0.683\|0.666 (0.004)\|(0.006) |
| CMAE | 0.441\|0.447 (0.002)\|(0.002) | 0.459\|0.460 (0.009)\|(0.010) | 0.440\|0.439 (0.002)\|(0.003) | 0.673\|0.653 (0.008)\|(0.007) | 0.685\|0.696 (0.017)\|(0.016) | 0.699\|0.691 (0.005)\|(0.008) |
| ReMasker | 0.437\|0.438 (0.002)\|(0.002) | 0.445\|0.443 (0.010)\|(0.010) | 0.402\|0.402 (0.003)\|(0.004) | 0.681\|0.665 (0.008)\|(0.006) | 0.691\|0.712 (0.017)\|(0.014) | 0.729\|0.709 (0.005)\|(0.006) |
| DiffPuter | 0.400\|0.415 (0.003)\|(0.004) | 0.386\|0.430 (0.010)\|(0.012) | 0.363\|0.372 (0.004)\|(0.004) | 0.731\|0.704 (0.009)\|(0.005) | 0.770\|0.752 (0.020)\|(0.021) | 0.794\|0.767 (0.024)\|(0.024) |
| HyperImpute | 0.406\|0.382 (0.003)\|(0.004) | 0.439\|0.391 (0.010)\|(0.011) | 0.393\|0.347 (0.005)\|(0.005) | 0.722\|0.734 (0.007)\|(0.011) | 0.727\|0.774 (0.017)\|(0.016) | 0.757\|0.776 (0.006)\|(0.007) |
**Table. R2**: T-test to quantify the statistical significance of the contributions of MT-CM and context awareness (CMAE = CACTI w/o context).
| Metric | Missingness | Target Method | Baseline Method | Diff. Est. | P Value |
|---|---|---|---|---|---|
| | All | CACTI | ReMasker | 0.034 | 4.4e-7 |
| | All | CACTI | CMAE | 0.013 | 1.1e-5 |
| | All | CMAE | ReMasker | 0.021 | 4.2e-5 |
| | MCAR | CACTI | ReMasker | 0.023 | 5.5e-4 |
| | MCAR | CACTI | CMAE | 0.014 | 3.e-3 |
| | MCAR | CMAE | ReMasker | 0.017 | 8.9e-3 |
| R2 | MAR | CACTI | ReMasker | 0.025 | 2.3e-2 |
| | MAR | CACTI | CMAE | 0.007 | 9.4e-2 |
| | MAR | CMAE | ReMasker | 0.018 | 4.8e-2 |
| | MNAR | CACTI | ReMasker | 0.054 | 1.1e-4 |
| | MNAR | CACTI | CMAE | 0.017 | 8.7e-4 |
| | MNAR | CMAE | ReMasker | 0.037 | 4.6e-4 |
**Table. R3**: Results excluding the 4 datasets used for sensitivity analysis.
| Method | | R2 | | | RMSE | | | WD | |
|-----------|-----------|-----------|-----------|------------|------------|-----------|-----------|-----------|-----------|
| | MCAR | MAR | MNAR | MCAR | MAR | MNAR | MCAR | MAR | MNAR |
| CACTI | 0.45\|0.45 | 0.47\|0.46 | 0.46\|0.46 | 0.67\|0.64 | 0.68\|0.71 | 0.69\|0.66 | 3.19\|3.23 | 1.36\|1.41 | 3.31\|3.36 |
| CMAE | 0.44\|0.44 | 0.47\|0.46 | 0.45\|0.44 | 0.68\|0.66 | 0.69\|0.70 | 0.70\|0.68 | 3.22\|3.27 | 1.41\|1.45 | 3.39\|3.44 |
| ReMasker | 0.44\|0.44 | 0.45\|0.45 | 0.42\|0.42 | 0.68\|0.66 | 0.69\|0.72 | 0.72\|0.68 | 3.24\|3.28 | 1.69\|1.72 | 3.33\|3.38 |
| DiffPuter | 0.41\|0.43 | 0.41\|0.46 | 0.38\|0.39 | 0.73\|0.69 | 0.74\|0.73 | 0.80\|0.76 | 3.33\|3.32 | 1.89\|1.53 | 3.43\|3.41 |
| HyperImpute | 0.40\|0.37 | 0.43\|0.38 | 0.39\|0.34 | 0.74\|0.75 | 0.73\|0.79 | 0.77\|0.76 | 2.75\|3.27 | 1.46\|1.86 | 2.76\|3.12 | | Summary: The authors introduce CACTI (Context Aware Copy masked Tabular Imputation) for imputing missing values in tabular data. CACTI’s backbone is a Masked Autoencoder based on Transformers. It brings the following key modifications to this architecture:
- Instead of randomly masking observed values as in ReMasker, it uses copy-masking: additional missing values are introduced in sample i by applying the missingness pattern of another sample j. These additionally masked values then serve as targets for the reconstruction loss. Copy-masking makes it possible to exploit the missingness patterns present in the dataset.
- Copy-masking is further enhanced with Median Truncated Copy Masking (MT-CM), which truncates all samples in a batch to have at most a maximum number of observed features, defined as the median number of observed features in the batch. This prevents the model from having to process too many missingness tokens.
- Context-awareness is achieved by concatenating column description embeddings (obtained with GTE-en-MLM-large) with the numerical embeddings of scalar values.
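The median-truncation step of MT-CM could be sketched roughly as follows (my reading of the description above, not the authors' code; the function name and random-drop strategy are illustrative):

```python
import numpy as np

def median_truncate(observed, rng):
    """Cap the number of observed features per sample at the batch
    median, randomly dropping the excess, so that no batch is dominated
    by missingness tokens.

    observed: (n, d) boolean mask after copy-masking has been applied.
    """
    counts = observed.sum(axis=1)
    cap = int(np.median(counts))           # batch-level cap
    out = observed.copy()
    for i in range(out.shape[0]):
        idx = np.flatnonzero(out[i])
        if idx.size > cap:
            # randomly discard the surplus observed features
            drop = rng.choice(idx, size=idx.size - cap, replace=False)
            out[i, drop] = False
    return out
```

Since the cap is the batch median, at most half the samples exceed it, which is consistent with the bound on missingness tokens per batch that the paper aims for.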
This architecture is evaluated on 10 datasets, across varying missingness rates and the 3 missingness mechanisms (MCAR, MAR, MNAR). The imputation is evaluated according to the reconstruction R2, RMSE, and Wasserstein distance. CACTI is evaluated against 13 baselines and shows state-of-the-art performance across the 3 missingness mechanisms.
Claims And Evidence: The claims are supported by convincing evidence.
Methods And Evaluation Criteria: Methods and evaluation criteria make sense.
Theoretical Claims: A “theoretical framework to motivate the need for copy masking and why random masking might not be optimal in non-MCAR settings” is provided in Appendix G. Yet it is not exposed in the main paper, and does not contain a clear proposition or theorem, so I did not consider it as a contribution.
Experimental Designs Or Analyses: yes
Supplementary Material: I quickly went through the supplementary material.
Relation To Broader Scientific Literature: This paper provides an imputation model for tabular data. Important references for this task are cited in the introduction. A particularity of this approach is that it is meant to also be effective in MNAR, while many methods are only valid under M(C)AR (although the theoretical justification for this is not clearly stated).
Essential References Not Discussed: I think that ReMasker and [3], which appear to be the cornerstones of this work (respectively for using a MAE with random masking for imputation, and for using copy-masking) could be introduced with more details in the related work.
For example, the authors say: “Our first contribution is in extending an approach introduced by [3] of using “… To clearly identify contributions, it would be easier to first introduce copy-masking as a related work, and then present the proposed extension as a contribution.
[3] An et al 2023, Deep learning-based phenotype imputation on population-scale biobank data increases genetic discoveries.
Other Strengths And Weaknesses: **Strengths**
- The improvements in the imputation performance, notably under MNAR, are large. +0.02 (MCAR) / +0.03 (MAR) / +0.06 (MNAR) R2 pts compared to the next best model ReMasker.
- Many ablation studies help identify the most critical parts of a model, providing valuable insights for guiding future research.
**Weaknesses**
- I found the paper difficult to follow. The explanations rely heavily on notations, which I believe unnecessarily complicates the presentation. Some concepts require significant effort to grasp. The writing could be improved by reducing unnecessary notations and incorporating figures to clarify key ideas, notably copy-masking.
- The results do not convincingly show that using the context improves imputation. We see on Table 2 that using the context on top of random masking improves the R2 from 0.20 to 0.26. However, the result is obtained over 4 datasets only. Moreover, when copy-masking is used instead of random masking, the effect of the context disappears in MAR and becomes very small in MNAR, potentially not significant. From these experiments, it is hard to tell whether using context can help imputations.
- Are the 4 datasets of the ablation studies part of the 10 evaluation datasets? If yes, as it seems that CACTI’s hyperparameters were chosen based on these 4 datasets, it would mean that hyperparameters were chosen on part of the test set, and that the performances may be inflated for these datasets.
Other Comments Or Suggestions: * On Fig 2., it may be more informative to plot the R2 relative to the mean R2 across methods for a given dataset and missingness mechanism.
* The results in Table 4 on model architecture (encoder and decoder depth and embedding size) show that the smallest options considered are almost the best. It is not clear that an encoder depth of 10 provides significantly better results than a depth of 4. Exploring smaller depths and embedding sizes would be useful, as the smaller the model the better (at a fixed performance level).
* Given the results presented in Figure 3 on Masking rate sensitivity, where increasing the copy masking probability improves performances up to 99% (the maximum probability tested), it could make sense to remove this hyperparameter and apply copy-masking 100% of the time.
* The default hyperparameters of competing methods should be specified in the Appendix to ensure reproducibility.
* Is the [MASK] token the same for all features?
**Typos:**
l.11: due due
l.66: to to
l.134: to the to
l.214: with a detail description
l.239: we conduct a through ablation
l.271: CACTI’s out performs
l.313: of the of the top 5
Questions For Authors: * L. 135, the authors state: “we perform a row-wise permutation to create a mask”. Row-wise permutation would mean shuffling elements within each row. Based on other elements in the paper, notably the permutation matrix of size N by N in Algorithm 1, I think that the authors do row shuffling (e.g. swapping row i with row j). What do the authors do?
* CACTI outperforms ReMasker by 0.03 R2 pts in MAR (0.06 in MNAR) on average over 10 datasets, but CACTI outperforms RMAE, which uses ReMasker’s random masking, by a much larger margin (+0.25 R2 pts in MAR and MNAR) on 4 datasets. Why such difference in the size of the improvement given that both RMAE and ReMasker use random masking?
* To leverage the missingness structure, a straightforward competing approach would be to include the mask as additional input features for the imputation model. This may not be straightforward for all methods, but it is for methods such as missforest.
* Do the 10 evaluation datasets include the 4 datasets used for exploring the architecture hyperparameters?
* How were the evaluation datasets chosen?
* What do the column descriptions look like in the 10 evaluation datasets? Were they chosen so that these descriptors are meaningful?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback.
## Does context help?
Our ablation analysis focused on the *relative* contributions of median truncated copy masking (MT-CM) and context awareness with fixed architecture settings and hyperparameters (Appendix C.2). Random masking autoencoder (RMAE) and ReMasker perform nearly identically when hyperparameters match (internally verified). Performance differences primarily stem from different masking rates: 30% for ReMasker (optimal per original paper) versus 90% for CACTI/RMAE.
To address potential ambiguity, we extended our analysis of CMAE (CACTI without context but with MT-CM) from 4 to all 10 datasets (qrHN rebuttal Table. R1). CMAE consistently improves or matches ReMasker across R2, RMSE, and WD metrics, while CACTI strictly dominates CMAE in all conditions. This confirms that context can improve imputation accuracy. We also performed a comparison using one-sided paired t-tests (qrHN rebuttal Table. R2). CACTI shows statistically significant (p<0.05) improvement over CMAE for R2 (and RMSE) in most settings, reinforcing that context enhances imputation. MT-CM (via CMAE) delivers statistically significant R2 improvements over random masking (via ReMasker) across all settings. CACTI, combining both approaches, shows significant improvement across nearly all metrics and conditions. While MT-CM provides larger improvements, context contributes meaningfully, demonstrating that both MT-CM and context awareness provide useful inductive biases.
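The one-sided paired t-test comparison described above can be reproduced in outline as follows. The per-dataset R2 values here are made-up placeholders, not the authors' numbers, and the `alternative` keyword of `scipy.stats.ttest_rel` requires SciPy >= 1.6.

```python
import numpy as np
from scipy import stats

# Hypothetical per-dataset R2 scores for two methods on the same 10 datasets.
r2_cacti = np.array([0.55, 0.41, 0.57, 0.29, 0.80, 0.55, 0.29, 0.37, 0.26, 0.51])
r2_cmae  = np.array([0.54, 0.38, 0.55, 0.28, 0.79, 0.56, 0.28, 0.37, 0.23, 0.49])

# One-sided paired t-test: H1 = CACTI's mean R2 exceeds CMAE's.
res = stats.ttest_rel(r2_cacti, r2_cmae, alternative="greater")
win_rate = np.mean(r2_cacti > r2_cmae) * 100  # % of datasets where CACTI wins
print(f"p = {res.pvalue:.3g}, mean diff = {np.mean(r2_cacti - r2_cmae):.3f}, "
      f"win rate = {win_rate:.0f}%")
```

The pairing matters: each dataset contributes one difference, so the test controls for between-dataset variation in task difficulty, which is why it is preferred over an unpaired comparison of the two score columns.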
## Datasets
We selected 10 open datasets from those used by Hyperimpute, ReMasker, and Diffputer authors for fair comparison. Our selection criteria were: 1) 6 with mixed features and 4 with continuous-only features datasets to ensure type diversity, and 2) half the datasets having feature descriptions. Table A9 will be expanded with information about which datasets have column descriptions. Our anonymous repo provides code to obtain all datasets and generate colinfo.txt files (see "Generating UCI ML Datasets" in README).
The 4 datasets used in ablation and sensitivity analysis were randomly chosen from the 10 used in main benchmarking. While this approach is common in prior works, we acknowledge the potential risk of inflating overall results. To maintain fairness, we used similar/same datasets and optimal parameters from previous works. To directly address this concern, we re-analyzed our main benchmarking results excluding these 4 datasets (qrHN rebuttal Table. R3). Results confirm CACTI still strictly outperforms existing state-of-the-art methods in R2 and RMSE, addressing concerns about fairness and result inflation.
## Writing
We appreciate the reviewer highlighting these typos. While our notation was chosen for precision, we will move significant portions to the appendix, replacing them with more intuitive text descriptions and figures. ReMasker and [3] are indeed key previous works and we’ll better highlight ReMasker and move the discussion on copy masking from methods to related works. The [MASK] token is indeed fixed and identical for all features, following standard practice in MAEs and ViTs. The hyperparameters of baseline method will be specified in an appendix table. Finally, we apologize for the ambiguity - we mean row-wise shuffling (permuting rows of the observed missingness mask matrix). All these will be fixed in the final draft.
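The distinction the reviewer asked about (literal row-wise permutation versus the row shuffling the authors confirm) can be contrasted in a short sketch; this is illustrative only, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.arange(12).reshape(4, 3)   # stand-in for a 4x3 missingness mask

# Row shuffling (what the authors mean): whole rows are reordered, so each
# sample receives another sample's missingness pattern intact.
row_shuffled = M[rng.permutation(M.shape[0])]

# "Row-wise permutation" read literally: entries are shuffled within each
# row, which would scramble each pattern itself.
within_row = np.stack([rng.permutation(row) for row in M])
```

In copy-masking the first operation is the meaningful one, since it transfers realistic missingness patterns between samples rather than destroying them.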
## Other Comments
Regarding the comment about including “the mask as additional input features for the imputation model”, we included notMIWAE, which leverages this idea. It maximizes the ELBO of the joint likelihood of the observed features and missing pattern via IWVAE, this ensured we compared against methods that directly factor in missingness structure. As seen in Table 2, we outperform notMIWAE.
Next, while higher copy-masking rates sometimes improve performance, keeping this as a tunable hyperparameter is justified because: 1) performance decreases at rates >95% in MNAR settings for R² and WD; 2) our conservative 90% rate maximizes average performance across sensitivity analysis datasets but isn't universally optimal; 3) practitioners often prioritize specific features based on domain needs, making flexible masking valuable; and 4) optimal masking rates vary significantly across datasets. Next, the embedding size and depths chosen here perform consistently well with reasonable resource usage (<200MB GPU RAM). Since imputation is rarely the end application, we maintain these as tunable parameters for users to optimize their specific use cases.
Finally, we refer the reviewer to point 4 in the contributions section of rebuttal to reviewer eZeh, which addresses concerns about the theoretical contributions.
We hope our responses have addressed the reviewer's primary concerns and demonstrated the paper's contribution to current literature. If so, we would appreciate a reconsideration of the overall recommendation. We're happy to address any additional questions!
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses.
According to the new tables provided, it seems that the context (i.e. using CACTI rather than CMAE) improves performances slightly but significantly in MCAR and MNAR, but not in MAR (all for a missingness proportion of 30%).
These results do not entirely convince me that the context significantly improves performances (ie CACTI relative to CMAE), because they are based on a limited number of datasets, at a single missing rate, and are significant for 2/3 missingness mechanisms.
To see results per dataset rather than on average, could the authors provide a scatterplot (or a table if easier) showing the performance of CMAE vs CACTI for each dataset, identifying which dataset has feature descriptions?
For future work, I think it would be an interesting sanity check to permute feature names and descriptions in a dataset, and check whether this affects performances.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for engaging in this discussion! We extended our analysis to all missingness percentages and settings: context provides a statistically significant improvement (p<0.05) in all settings except for MAR@30% (Table R4). CACTI also outperforms CMAE (win rate) in a majority of the datasets. Please see the requested per dataset results for 30% missingness in Table R5.
Additionally, all datasets have column names but some also have extended column descriptions. Column relatedness will vary across datasets, and column names alone may contain sufficient semantic information. The following datasets have column descriptions for all features: california, magic, letter, obesity, students. We also humbly note that the number and diversity of evaluation datasets are on par with the evaluation procedure used for the most recently published imputation approaches such as DiffPuter (ICLR’25) and ReMasker (ICLR’24) to allow for a fair comparison with current literature. And indeed, it would be informative to assess the sensitivity of permuting the column description assignments and/or replace them with random information in future work.
Finally, while we claim that context *can* be a useful source of inductive bias, its contribution can vary. In settings where context is not helpful, the model can choose to ignore it. We hypothesize that MAR is probably the most straightforward setting for MAEs since only a subset of the features are masked (as opposed to all features in MCAR and MNAR). Here CACTI would simply learn a model that maximizes the probability of the masked feature given the observed. We expect context to be most useful in the most difficult MNAR cases where the imputation task is more complex.
**Due to the 5K character limit, we are unable to include the per datasets results for 10, 50 and 70 percent missingness here. If the reviewer would like to see these results we kindly ask them to reply to this rebuttal to give us a chance to add these results.**
**Table. R4**: T-test to quantify the statistical significance of the contributions of context awareness (CACTI v CMAE) in all missingness percentages. Win rate = % of datasets where CACTI > CMAE.
| Missingness | Miss % | P Value | Diff. Est. |Win Rate % |
|-----------|-----------|----------|--------| --------|
| MAR | 10 | 7.9e-03 | 0.023 | 89 |
| MAR | 30 | 9.4e-02 | 0.007 | 70|
| MAR | 50 | 3.7e-02 | 0.010 | 70|
| MAR | 70 | 7.2e-03 | 0.012 | 90|
| MCAR | 10 | 2.0e-02 | 0.014 | 80|
| MCAR | 30 | 3.5e-03 | 0.014 | 80|
| MCAR | 50 | 6.5e-03 | 0.011 | 90|
| MCAR | 70 | 2.0e-02 | 0.010 | 89|
| MNAR | 10 | 2.3e-03 | 0.017 | 80|
| MNAR | 30 | 8.7e-04 | 0.017 | 100|
| MNAR | 50 | 1.3e-03 | 0.016 | 100|
| MNAR | 70 | 4.6e-04 | 0.015 | 100|
**Table. R5**: R2 performance estimates for all datasets, missingness types, at 30% missingness. Delta = the increase in CACTI R2 relative to CMAE.
| Dataset | Missingness | CACTI | CMAE | Delta |
|------------|-----------|-------|-------|----------|
| bike | MAR | 0.560 | 0.538 | 0.0222 |
| bike | MCAR | 0.553 | 0.538 | 0.0147 |
| bike | MNAR | 0.551 | 0.525 | 0.0261 |
| california | MAR | 0.435 | 0.444 | -0.00907 |
| california | MCAR | 0.412 | 0.377 | 0.0348 |
| california | MNAR | 0.390 | 0.344 | 0.0460 |
| default | MAR | 0.632 | 0.629 | 0.00261 |
| default | MCAR | 0.569 | 0.553 | 0.0157 |
| default | MNAR | 0.533 | 0.518 | 0.0146 |
| income | MAR | 0.246 | 0.246 | 0.000169 |
| income | MCAR | 0.286 | 0.284 | 0.00152 |
| income | MNAR | 0.306 | 0.301 | 0.00435 |
| letter | MAR | 0.818 | 0.804 | 0.0131 |
| letter | MCAR | 0.804 | 0.793 | 0.0107 |
| letter | MNAR | 0.779 | 0.768 | 0.0104 |
| magic | MAR | 0.517 | 0.525 | -0.00879 |
| magic | MCAR | 0.554 | 0.558 | -0.00373 |
| magic | MNAR | 0.582 | 0.572 | 0.00987 |
| obesity | MAR | 0.358 | 0.362 | -0.00419 |
| obesity | MCAR | 0.293 | 0.280 | 0.0132 |
| obesity | MNAR | 0.292 | 0.284 | 0.00857 |
| shoppers | MAR | 0.402 | 0.398 | 0.00459 |
| shoppers | MCAR | 0.370 | 0.370 | -0.00022 |
| shoppers | MNAR | 0.394 | 0.381 | 0.0133 |
| spam | MAR | 0.251 | 0.210 | 0.0403 |
| spam | MCAR | 0.260 | 0.230 | 0.0297 |
| spam | MNAR | 0.238 | 0.210 | 0.0286 |
| students | MAR | 0.454 | 0.446 | 0.00750 |
| students | MCAR | 0.512 | 0.486 | 0.0253 |
| students | MNAR | 0.494 | 0.482 | 0.0127 | | null | null | null | null | null | null | null | null |
BSLoRA: Enhancing the Parameter Efficiency of LoRA with Intra-Layer and Inter-Layer Sharing | Accept (poster) | Summary: This paper proposes Bi-Share LoRA (BSLoRA), a parameter-efficient fine-tuning approach for large language models (LLMs). The key idea is to improve upon standard Low-Rank Adaptation (LoRA) by introducing intra-layer and inter-layer parameter sharing to reduce the number of trainable parameters while maintaining or improving model performance.
Claims And Evidence: The paper makes the following key claims, and most of them are supported by evidence in the experiments:
- Claim 1: BSLoRA reduces trainable parameters.
- The results show that BSLoRA achieves around 50% parameter reduction while maintaining comparable or slightly improved performance over standard LoRA.
- Limitation: The improvement is quite marginal, which raises concerns about whether the added complexity justifies the gains.
- Claim 2: The proposed shape transformation techniques enable flexible sharing.
- The introduction of Kronecker Extension, Gate Transformation, and Slice Sharing provides different strategies for handling parameter shape mismatches, making parameter sharing feasible across different model components.
Methods And Evaluation Criteria: The proposed BSLoRA method aligns with the general objective of improving parameter-efficient fine-tuning (PEFT) in LLMs by reducing trainable parameters while maintaining model performance.
Theoretical Claims: There are no deep theoretical contributions or proofs in this paper.
Experimental Designs Or Analyses: - The benchmarks (commonsense reasoning and MMLU) are widely used in LLM research and provide a fair basis for evaluating performance.
- The baselines (LoRA, VeRA, VB-LoRA, ShareLoRA, Tied-LoRA) are appropriate and competitive, ensuring that comparisons are meaningful.
- The absence of throughput, training speed, and memory usage comparisons in the main paper weakens the argument for parameter efficiency.
Supplementary Material: Reviewed memory usage analysis: It suggests that BSLoRA's advantage becomes significant when the number of tasks exceeds 200.
Relation To Broader Scientific Literature: The paper is related to prior work on parameter-efficient fine-tuning (PEFT):
- LoRA (Hu et al., 2022): The foundational method that BSLoRA extends.
- VeRA (Kopiczko et al., 2024) and VB-LoRA (Li et al., 2024): Parameter-sharing approaches that BSLoRA compares against.
- MultiLoRA (Wang et al., 2023) and HydraLoRA (Tian et al., 2024): Related work on adapting LoRA for multi-task learning.
However, the core techniques used (low-rank factorization, Kronecker product, and parameter sharing) are not new.
Essential References Not Discussed: The paper covers the most relevant prior work.
Other Strengths And Weaknesses: ## Strengths
- The proposed multi-layer parameter-sharing strategy is well-motivated and provides clear parameter savings.
- The experiments are thorough, with evaluations of multiple models and tasks.
- Shape transformation techniques (Kronecker Extension, Gate Transformation) effectively solve parameter sharing constraints.
## Weaknesses
- Limited empirical improvements: The performance gain is marginal, making it unclear whether the complexity trade-off is justified.
- No throughput/memory efficiency evaluation: The paper claims efficiency improvements, but no evidence of speedup or memory savings is presented in the main paper.
- Not highly novel: The method mainly repackages known techniques (parameter sharing, Kronecker product, low-rank adaptation) rather than introducing fundamentally new ideas.
- Limited practical advantage: The main benefit only appears in large-scale multi-task settings (>200 tasks), which isn't a common real-world scenario.
Other Comments Or Suggestions: No other suggestion.
Questions For Authors: - How does BSLoRA impact training and inference speed?
- How does BSLoRA compare to LoRA in memory consumption during training?
- The performance gains are modest, questioning whether the increased architectural complexity is worth the trade-off.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Comment:** "*Limited empirical improvements: The performance gain is marginal, making it unclear whether the complexity trade-off is justified.*"
**Answer to C1**: Thank you for your comment. Please refer to R2C4.
---
**Comment:** "*No throughput/memory efficiency evaluation: The paper claims efficiency improvements, but no evidence of speedup or memory savings is presented in the main paper.*"
**Answer to C2**: Thank you for your comment. The goal of our method is to reduce parameter redundancy through effective parameter sharing. Our experimental results demonstrate that **BSLoRA achieves better performance with fewer trainable parameters**, indicating a significant improvement in **parameter efficiency**. In the appendix, we provide a detailed comparison of the **memory usage** between standard LoRA and BSLoRA under varying numbers of concurrently deployed downstream tasks. As shown in Figure 6, BSLoRA consumes **substantially less GPU memory during inference**, enabling **more downstream tasks to be deployed simultaneously**. This highlights the practical value of our method in **multi-task inference scenarios**, especially under memory-constrained conditions.
We also test the inference speed comparison on the dataset of wikitext2 and ptb, the results are shown in https://anonymous.4open.science/r/BSLoRA-1E72/ . Results indicate that both GT and KE can achieve inference acceleration.
---
**Comment:** "*Not highly novel: The method mainly repackages known techniques (parameter sharing, Kronecker product, low-rank adaptation) rather than introducing fundamentally new ideas.*"
**Answer to C3**: Thank you for the thoughtful question. As pre-trained language models continue to scale up, even parameter-efficient fine-tuning methods like LoRA introduce a substantial number of trainable parameters. Moreover, in real-world inference settings, LoRA adapters are typically kept separate from the backbone model to support simultaneous deployment of multiple downstream tasks. This makes reducing the memory footprint of LoRA adapters a critical challenge. Existing methods such as ShareLoRA focus only on **inter-layer parameter sharing**, neglecting the **intra-layer redundancy** that also contributes significantly to parameter inefficiency. **BSLoRA addresses this gap by introducing both intra-layer and inter-layer sharing mechanisms**, aiming to further reduce redundancy and achieve comparable or even improved performance with fewer trainable parameters. To overcome the challenge of sharing parameters with incompatible shapes, we introduce **three lightweight but effective shape transformation techniques** (SS, GT, KE), enabling flexible parameter sharing across diverse model components. While each individual technique may not be novel, their **systematic combination and targeted application to large language model fine-tuning** constitutes a meaningful contribution. Looking ahead, we plan to explore even more advanced sharing strategies to further enhance parameter efficiency.
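The shape-mismatch problem that Kronecker Extension addresses can be illustrated with a minimal sketch. This is not the authors' exact formulation: the function name and the use of a per-module expansion factor are assumptions, meant only to show how a Kronecker product lets one shared block serve weights of different shapes.

```python
import numpy as np

def kronecker_extend(shared, factor):
    """Expand a small shared block to a larger target shape via a Kronecker
    product with a per-module expansion factor. np.kron(A, B) has shape
    (A.shape[0]*B.shape[0], A.shape[1]*B.shape[1])."""
    return np.kron(factor, shared)

shared = np.random.randn(8, 8)   # one block shared across modules/layers

# Per-module factors select the target shape: (f_out*8, f_in*8).
delta_attn = kronecker_extend(shared, np.random.randn(8, 8))    # 64 x 64
delta_mlp  = kronecker_extend(shared, np.random.randn(16, 8))   # 128 x 64
```

In a trainable setting the factors would be small learnable parameters (e.g. `torch.nn.Parameter`), so the per-module cost is tiny relative to storing a separate full adapter per module.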
---
**Comment:** "*Limited practical advantage: The main benefit only appears in large-scale multi-task settings (>200 tasks), which isn't a common real-world scenario.*"
**Answer to C4:** Reducing the size of LoRA adapters is practically important not only in large-scale multi-task settings (e.g., >200 tasks), but also increasingly so as model sizes continue to grow. For instance, applying LoRA with rank 64 to all linear layers in LLaMA 70B consumes approximately 1.4 GB of memory, and this rises to 4.5 GB for a 405B model. When deploying more than 10 tasks concurrently—common in real-world cloud services or multi-user systems—the cumulative memory usage of LoRA adapters alone can reach 45 GB, posing a significant burden for inference servers and edge devices alike. While smaller models may require hundreds of tasks to reach memory bottlenecks, larger models hit these limits even with far fewer concurrently deployed adapters. This makes adapter memory a critical bottleneck for scaling, both in deployment cost and system responsiveness. BSLoRA addresses this challenge by reducing memory usage by over 50% compared to standard LoRA, enabling significantly more efficient multi-task deployment without sacrificing performance.
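A back-of-envelope calculation lands in the same ballpark as the adapter-size figures cited above. The layer count and dimensions below are rough assumptions for a LLaMA-70B-like architecture (80 layers, GQA k/v projections, fp16/bf16 storage); the exact total depends on which layers are adapted and the dtype.

```python
# Rough LoRA adapter size for a LLaMA-70B-like model at rank 64.
layers = 80
hidden, inter, kv = 8192, 28672, 1024   # hidden, MLP intermediate, GQA k/v dim
rank = 64
bytes_per_param = 2                      # fp16 / bf16

# (in_features, out_features) for q, k, v, o, gate, up, down projections.
linear_shapes = [
    (hidden, hidden), (hidden, kv), (hidden, kv), (hidden, hidden),
    (hidden, inter), (hidden, inter), (inter, hidden),
]
# A rank-r LoRA pair for an (in, out) layer adds r*(in + out) parameters.
params = layers * sum(rank * (i + o) for i, o in linear_shapes)
gb = params * bytes_per_param / 1024**3
print(f"{params / 1e6:.0f}M LoRA params ≈ {gb:.2f} GB per adapter")
# With N concurrently deployed tasks, adapter memory scales as N * gb.
```

This sketch gives roughly 1.5 GB per adapter, consistent in magnitude with the ~1.4 GB figure; multiplying by tens of concurrent tasks quickly reaches the tens-of-gigabytes regime described above.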
---
Rebuttal Comment 1.1:
Comment: The authors' response has addressed my concerns regarding the efficiency of the method, so I have decided to raise my score.
I also noticed that some methods reduce model parameters through parameter sharing, such as Layerlink [1]. I suggest the authors consider citing this relevant work.
[1] "Layerlink: Bridging remote sensing object detection and large vision models with efficient fine-tuning." Pattern Recognition (2025): 111583.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful follow‑up and for raising your score after reading our rebuttal. We truly appreciate the time and care you invested in evaluating our work. We are also grateful for your recommendation to reference Layerlink [1], which is highly relevant to our study. We agree that citing this work will strengthen the contextualization of our method. We will add the citation and discuss its relationship to our layer‑sharing strategy in the revised manuscript. Your constructive feedback has been invaluable in improving the quality and clarity of our paper. Thank you again for your support. | Summary: The paper introduces Bi-Share LoRA (BSLoRA), which improves memory efficiency and inference speed by sharing parameters both within a layer (intra-layer) and across layers (inter-layer). The approach also introduces three shape transformation techniques—Slice Sharing, Gate Transformation, and Kronecker Extension—to ensure flexible parameter sharing across different module structures. Experimental results on LLaMA models (7B, 8B, and 13B) show that BSLoRA reduces parameters while improving or maintaining model performance on Commonsense Reasoning and MMLU benchmarks.
Claims And Evidence: The claims made in the submission supported by clear and convincing evidence.
Methods And Evaluation Criteria: The paper claims reduced memory overhead, but does not provide empirical results on inference latency (does BSLoRA speed up inference compared to LoRA?).
Theoretical Claims: No explicit mathematical proofs are presented, so there are no incorrect derivations to check.
Experimental Designs Or Analyses: - Even though the authors have conducted a contribution analysis by setting the rank of one sub-LoRA matrix to 8, the settings for Table 1 and Table 2 are not consistent. The ablation study evaluates the effectiveness of the three different modules by removing each component sequentially.
- Could you validate the effectiveness of BSLoRA by comparing it with ShareLoRA at different ranks, while keeping the number of trainable parameters similar?
Supplementary Material: Yes. The appendix.
Relation To Broader Scientific Literature: The key contributions of the paper relate to prior work in parameter-efficient fine-tuning (PEFT), particularly in LoRA-based architectures. The paper builds on existing research in LoRA parameter sharing, addressing its limitations through a new bi-sharing approach that combines intra-layer and inter-layer weight sharing. While the paper contributes to efficiency, it would benefit from a more explicit connection to pruning or sparsity techniques in neural networks, as well as a deeper discussion on its applicability to broader PEFT techniques beyond LoRA .
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths:**
- The Bi-Share LoRA (BSLoRA) approach is interesting in how it combines intra-layer and inter-layer parameter sharing, addressing key limitations in LoRA-based fine-tuning.
- The method improves upon existing approaches such as VeRA, Tied-LoRA, and ShareLoRA, making it a significant contribution to parameter-efficient fine-tuning (PEFT).
- The paper is well-structured, with clear explanations of the methodology and mathematical formulations.
**Weaknesses:**
- The ablation study is relatively insufficient. See the section above, *"Experimental Designs or Analyses,"* and the section below, *"Questions for Authors,"* for details.
Other Comments Or Suggestions: - Could you validate the effectiveness of BSLoRA by comparing it with ShareLoRA at different ranks, while keeping the number of trainable parameters similar?
Questions For Authors: - For the hyperparameter setting, how is r selected for different reshaping methods? Why not choose a consistent r across all methods?
- The Gate Transformation (GT) method yields a rank value equivalent to the local rank plus 2, likely due to one-rank gates causing some information loss. However, the performance of GT is comparable to the other two methods (i.e., SS and KE) in Table 1 and Table 2. How can this phenomenon be explained?
- “Specifically, the local component of LoRA performs best on HellaSwag, ARC-e, BoolQ, and SIQA, while the intra-layer shared component excels on PIQA and ARC-c.” Could you provide some insight into this phenomenon?
- “The results in Figure 4(c) show that modifying the constant improves both expressiveness and information content.” However, this conclusion is not obvious from Figure 4(c). How was it derived?
- In Table 10, different rank settings achieve similar performance, and the claim that 'Assigning a lower rank to the local component and a higher rank to the shared parameters yielded better performance, further illustrating the redundancy in the standard LoRA parameters' is not obvious.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Comment:** "*... settings for Table 1 and Table 2 are not consistent. The ablation study evaluates the effectiveness of the three different modules by removing each component sequentially*."
**Answer of C1**: Thank you for your observation. The settings in Tab 1 and Tab 2 were designed to evaluate the **overall performance of BSLoRA in real-world scenarios**. Therefore, the rank settings in Tab 1 and Tab 2 were selected through extensive experiments to achieve **optimal performance** while maintaining parameter efficiency. In contrast, we chose rank=8 in the ablation study to clearly isolate each component’s contribution. These different configurations address different experimental goals.
---
**Comment:** "*Could you validate the effectiveness of BSLoRA by comparing it with ShareLoRA at different ranks, while keeping the number of trainable parameters similar?*"
**Answer to C2**: Thank you for your suggestion. We conducted additional experiments comparing **ShareLoRA at ranks 4, 8, 16, and 32** with **BSLoRA configured to have approximately the same number of trainable parameters**. The results (https://anonymous.4open.science/r/BSLoRA-1E72/) show that at **lower budgets**, only the **KE** outperforms ShareLoRA. This suggests that under tight parameter budgets, **SS and GT have limited capacity to extract useful features**. However, as the rank increases, **all three variants consistently outperform ShareLoRA**, indicating that BSLoRA is more effective at leveraging **redundant capacity** when more parameters are available. These findings further support that there exists **significant redundancy both within and across layers**, and our sharing strategies can lead to **more efficient parameter utilization**, especially as the parameter budget increases.
---
**Comment:** "*For the hyperparameter setting, how is r selected for different reshaping methods? Why not choose a consistent r across all methods?*"
**Answer to C3**: Thank you for your insightful comment. SS, GT, and KE each have distinct structural characteristics, so they naturally require different r configurations to achieve optimal performance. Using a single rank across all methods can be suboptimal, as each interacts with model layers differently. Instead, we determined **empirical best r values** through systematic experiments, balancing parameter efficiency and accuracy across benchmarks.
---
**Comment:** "*... the performance of GT is comparable to the other two methods (i.e., SS and KE) in Table 1 and Table 2. How can this phenomenon be explained?*"
**Answer to C4**: Your question is very interesting. One possible explanation is that GT **provides an implicit regularization effect, preventing overfitting to specific layers and encouraging a more generalizable representation**. Additionally, the learned input and output transformations may help refine the information flow, ensuring that key features are preserved while removing redundant ones.
**Comment:** '*“... local component of LoRA performs best on HellaSwag, ARC-e, BoolQ, and SIQA, while the intra-layer shared component excels on PIQA and ARC-c.” Could you provide some insight into this phenomenon?*'
**Answer to C5**: Thank you for your insight. The varying effectiveness likely arises from differing task requirements. Tasks like PIQA and ARC-c may "share similar patterns", so consistent features from shared parameters help. Meanwhile, tasks like HellaSwag and BoolQ may "demand fine-grained contextual understanding", benefiting more from local parameter usage. This underscores BSLoRA’s hybrid design in adapting to different task needs.
---
**Comment:** “*The results in Figure 4(c) show that modifying the constant improves both expressiveness and information content.” However, this conclusion is not obvious from Figure 4(c). How was it derived?*”
**Answer to C6**: Thank you for your comment. The current radar plot in Figure 4(c) does not clearly show the improvement. We inferred it by comparing **average performance** across methods. The results used in this analysis are shown in the link above. We will clarify this in the revised paper and improve either the figure or the accompanying explanation.
---
**Comment:** "*In Table 10, different rank settings achieve similar performance, and the claim that 'Assigning a lower rank to the local component and a higher rank to the shared parameters yielded better performance, further illustrating the redundancy in the standard LoRA parameters' is not obvious.*"
**Answer to C7**: Thank you for your observation. Adjusting only the local component's rank brings only minor performance improvement or degradation, indicating that a low-rank local component can achieve comparable performance. Besides, keeping a low-rank local component while adjusting the shared components consistently results in comparable performance. Therefore, we can assign a lower rank to the local component and higher ranks to the shared components to achieve better parameter efficiency. | Summary: This paper introduces a method that shares LoRA parameters across local, intra-layer, and inter-layer scopes. To address the resulting shape mismatch issues, shape transformations are introduced, including slice sharding, gate transformation, and Kronecker extension. Results on different datasets show the effectiveness of the method.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No proof.
Experimental Designs Or Analyses: Partially.
**Concerns about Baseline Comparisons and Method Selection**
1. Fairness of Comparison with Baselines:
The comparison between BSLoRA and baselines such as ShareLoRA and Tied-LoRA may be unfair. BSLoRA applies LoRA to both attention and MLP modules, whereas ShareLoRA and Tied-LoRA are limited to attention modules only. This difference means BSLoRA updates more parameters, potentially inflating its performance relative to the baselines. To evaluate the true effectiveness of BSLoRA’s design, how does applying LoRA to both attention and MLP modules compare to applying it to only one module type (e.g., attention-only or MLP-only)? Does the combined approach yield synergistic benefits from updating both module types, or is the performance improvement merely a result of having more trainable parameters? Including ablation studies (e.g., separate versus joint tuning of attention and MLP modules) would help clarify the contribution of this design choice.
2. Rationale for Tied-LoRA Variant Selection:
The paper employs the TL5 variant of Tied-LoRA in its experiments, despite stating that the TL6 variant performs better. What motivated the choice of TL5 over TL6, and how does this decision influence the reported results? A clear explanation of this selection—or additional results using TL6—would provide greater confidence in the findings and ensure consistency between the claims and the experimental setup.
**Inconsistent Optimal Methods Across Setups**
Table 2 results show that no single configuration among SS, GT, and KE consistently outperforms the others across tasks, model sizes, or ranks.
**The improvements compared with ShareLoRA and Tied-LoRA are marginal.**
Supplementary Material: Yes. Code.
Relation To Broader Scientific Literature: The key contributions of BS LoRA are deeply tied to the broader scientific literature on parameter-efficient fine-tuning of LLMs. It builds on the foundational work of LoRA and extends the parameter-sharing ideas pioneered by ShareLoRA and Tied-LoRA. By introducing a block-wise sharing strategy, BS LoRA offers a new approach that seeks to improve upon prior findings—balancing the reduction of trainable parameters with the need for layer-specific adaptability. This work contributes to the evolving field of PEFT, where efficiency remains a critical concern for deploying LLMs in resource-constrained environments.
Essential References Not Discussed: No
Other Strengths And Weaknesses: While the paper has some weaknesses, as discussed above, it also demonstrates notable strengths that enhance its value. The writing is clear and accessible, effectively simplifying the complex topic of shape mismatching in machine learning models. Additionally, the paper proposes multiple innovative solutions to address shape mismatching, each accompanied by a balanced discussion of its advantages and disadvantages. This thorough approach highlights the authors' expertise and offers practical insights for addressing similar challenges.
Other Comments Or Suggestions: No.
Questions For Authors: Please see the above questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Comment:** "*Fairness of Comparison with Baselines: The comparison between BSLoRA and baselines such as ShareLoRA and Tied LoRA may be unfair. ... To evaluate the true effectiveness of BS LoRA’s design, how does applying LoRA to both attention and MLP modules compare to applying it to only one module type (e.g., attention-only or MLP-only)? ....*"
**Answer to C1:** Thank you for the thoughtful suggestion. We agree that evaluating the contribution of parameter sharing across different module types is important for a fair comparison. To address this, we conducted an additional ablation study, comparing BSLoRA applied to **only the attention modules**, **only the MLP modules**, and **both modules simultaneously**.
The results show that applying BSLoRA to the **attention modules alone yields better performance** than applying it only to the MLP modules, indicating that **redundancy in attention layers is more significant**. Moreover, applying BSLoRA to **both attention and MLP modules further improves performance** over attention-only tuning. This suggests that **redundancy exists not only across layers but also across different module types within the same layer**, as discussed in Section 2.2. These findings support our core claim: combining **intra-layer and inter-layer sharing** strategies enables more effective reduction of parameter redundancy, while also contributing to better overall performance. The results are shown in https://anonymous.4open.science/r/BSLoRA-1E72/ .
---
**Comment:** "*Rationale for Tied-LoRA Variant Selection*"
**Answer to C2:** Thank you for raising this important question. After reviewing the original Tied-LoRA paper, we observed that **both TL5 and TL6 perform well** across various tasks. In our own experiments, we replicated both the TL5 and TL6 configurations. Based on the specific training setup and evaluation methods used in our study, **we found that TL5 outperformed TL6 under these conditions**. Consequently, we decided to present the results for TL5 in the main text of the paper. We will clarify this choice in the revised version of the paper and provide a detailed comparison between TL5 and TL6 in the appendix to ensure transparency and consistency in the reported findings. The results are shown in https://anonymous.4open.science/r/BSLoRA-1E72/ .
**Comment:** "*Inconsistent Optimal Methods Across Setups: Table 2 results show that no single configuration among SS, GT, and KE consistently outperforms the others across tasks, model sizes, or ranks.*"
**Answer to C3 :** Thank you for your thoughtful observation. Indeed, the optimal configuration of **SS**, **GT**, and **KE** varies across tasks, model sizes, and ranks. This variability arises due to the **task-specific nature of the benchmarks** and the **differences in model layer architectures**. Our approach aims to provide **flexibility** by offering multiple methods that can be tailored to the requirements of different tasks and model sizes.
For example, as shown in **Table 2**, the **SS method performs best** on **Llama 1-7B**, while the **KE method performs best on Llama 3-8B**. This flexibility allows users to select the method that fits their deployment context. Moreover, the performance variation across different methods is minimal, highlighting the **robustness of BSLoRA** across diverse settings.
We believe this **flexibility** is one of BSLoRA’s strengths, enabling it to be adapted for various deployment scenarios without the need for extensive hyperparameter tuning for each task.
---
**Comment:** "*The improvements compared with ShareLoRA and Tied LoRA are marginal.*"
**Answer to C4:** Thank you for your comment. While the performance gains of BSLoRA over other parameter-sharing methods such as ShareLoRA and Tied-LoRA may appear modest, our method introduces a **more fine-grained parameter sharing strategy**. As discussed in Section 2.2 of our paper, redundancy in LoRA parameters exists not only across layers but also **within the same layer across different parameter modules**. However, existing approaches focus only on inter-layer sharing for modules at the same relative position, **overlooking the redundancy among intra-layer components**.
BSLoRA addresses this limitation by jointly implementing **intra-layer and inter-layer parameter sharing**, enabling a more comprehensive reduction in trainable parameters. To overcome the challenge of sharing across weights with mismatched shapes, we further propose **three shape transformation methods** (SS, GT, KE), which allow the model to flexibly align and share information across diverse modules. This design increases the **parameter efficiency** and enhances the model’s ability to capture shared features across tasks. Our experimental results show that BSLoRA not only achieves a further reduction in parameter redundancy but also delivers **consistent performance improvements**, showing a better balance between efficiency and effectiveness than existing sharing-based methods. | Summary: In this paper, the authors propose BSLoRA, which adds intra-layer and inter-layer sharing to LoRA to reduce trainable parameters while maintaining performance. Multiple experiments were conducted on several benchmark datasets and showed slightly better performance.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Yes, the main experimental results, analysis, and ablation study.
Supplementary Material: Yes, only code was included in the supplementary material.
Relation To Broader Scientific Literature: The introduce intra-layer and inter-layer sharing approach further reduced the trainable parameters of LoRA and could enable fine-tuning of much larger models.
Essential References Not Discussed: No, essential references are adequate.
Other Strengths And Weaknesses: ***Strengths***
1. The idea of adding intra-layer and inter-layer sharing to further reduce trainable parameters of LoRA makes sense and is technically sound to me.
2. The authors conducted extensive experiments, analyses, and ablation studies to validate the proposed approach.
3. Writing is good and easy to follow.
***Weaknesses***
1. Given that LoRA already reduced the trainable parameters dramatically compared to full training, the additional reduction from the proposed intra-layer and inter-layer sharing is relatively small.
2. Given the complexity of hyper-parameter tuning and shape transformation (SS, GT, KE), I am not fully convinced that the gain would outweigh the effort.
Other Comments Or Suggestions: 1. Typo in the caption of Figure 1: "(left)" and "(right)" should be "(top)" and "(bottom)".
Questions For Authors: Please refer to the "Other Strengths And Weaknesses" for the detailed comments and provide additional justification regarding the balance between benefits gained and required effort.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Comment:** "*Given that LoRA already reduced the trainable parameters dramatically compared to full training, the additional reduction from the proposed intra-layer and inter-layer sharing is relatively small.*"
**Answer to C1**: Thank you for your insightful comment. While it is true that LoRA significantly reduces trainable parameters compared to full fine-tuning, the memory footprint of LoRA adapters becomes increasingly non-negligible as the size of the pre-trained model and the number of downstream tasks grow. In real-world inference systems, **LoRA adapters are typically not merged with the base model** to support dynamic multi-task serving. For instance, applying LoRA with rank 64 on LLaMA 70B introduces over 1.4 GB of additional memory. As more tasks are deployed simultaneously (e.g., **100 tasks would take about 140 GB of memory**), this overhead quickly becomes a major bottleneck.
To address this, recent works such as Vera, ShareLoRA, and Tied-LoRA explore further parameter reduction. Building on these efforts, BSLoRA introduces **a more fine-grained parameter sharing** framework that combines local, intra-layer, and inter-layer components. To enable sharing across weights of different shapes, we also propose three shape transformation strategies (SS, GT, KE). This design not only reduces trainable parameters but also improves model performance, as demonstrated in our experiments. From our experimental results, **BSLoRA can reduce over 50% of trainable parameters on average and achieve an average 1.25% performance improvement.** Importantly, BSLoRA can be seen as **a flexible superset of LoRA**, offering a tunable architecture that adapts to both performance and efficiency needs.
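The adapter-memory figure above can be sanity-checked with a quick back-of-envelope calculation. The sketch below uses the published LLaMA-2-70B shapes (80 layers, hidden size 8192, MLP intermediate size 28672, grouped-query attention with a 1024-dim KV projection); the rank-64, all-linear-layers LoRA configuration is our assumption for illustration:

```python
# Back-of-envelope check of LoRA adapter memory for a LLaMA-70B-style model.
# Dimensions are the published LLaMA-2-70B shapes; the rank-64,
# all-linear-layers LoRA configuration is an assumption for illustration.

def lora_params(d_in, d_out, r):
    """A rank-r LoRA adapter on a d_in x d_out weight adds r*(d_in + d_out) params."""
    return r * (d_in + d_out)

hidden, inter, n_layers, r = 8192, 28672, 80, 64
kv_dim = 1024  # 8 KV heads x 128 head dim (grouped-query attention)

per_layer = (
    lora_params(hidden, hidden, r) * 2    # q_proj, o_proj
    + lora_params(hidden, kv_dim, r) * 2  # k_proj, v_proj
    + lora_params(hidden, inter, r) * 3   # gate_proj, up_proj, down_proj
)
total = per_layer * n_layers
bytes_fp16 = total * 2  # 2 bytes per parameter in fp16/bf16

print(f"{total / 1e6:.0f}M adapter params, {bytes_fp16 / 2**30:.2f} GiB in fp16")
```

At two bytes per parameter this comes to roughly 1.5 GiB of adapter weights, consistent with the "over 1.4 GB" figure quoted above.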
---
**Comment:** "*Given the complexity of hyper-parameter tuning and shape transformation (SS, GT, KE), I am not fully convinced that the gain would outweigh the effort.*"
**Answer to C2**: We appreciate your concerns regarding the complexity of hyperparameter tuning and shape transformation. In practice, **BSLoRA does not introduce additional tuning overhead compared to standard LoRA**—the only tuning required is selecting appropriate rank values for the chosen sharing strategy. Importantly, **all three proposed sharing strategies (SS, GT, KE) consistently reduce parameter count while maintaining or improving performance**, making the selection process flexible and low-effort. In our experiments, we empirically chose rank settings that best fit each sharing method, but our **ablation studies (Tables 7, 8, and 9) show that BSLoRA performs robustly across different configurations**, demonstrating that extensive hyperparameter tuning is not necessary for effective deployment. By default, we adopt the **Kronecker Extension (KE)** strategy in our setup, as it offers the best trade-off between **parameter efficiency and performance**. Overall, BSLoRA is a simple yet effective solution that achieves higher efficiency **without adding tuning complexity**.
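As a schematic illustration of the kind of shape alignment a Kronecker-product extension enables (a generic sketch, not necessarily BSLoRA's exact KE parameterization), a small shared factor can be expanded to match a larger target weight shape:

```python
# Schematic sketch of Kronecker-product shape extension: a small shared
# matrix S is expanded via kron(S, T) with a tiny per-module factor T so its
# shape matches a larger target weight. Generic illustration only; not
# necessarily BSLoRA's exact KE formulation.

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists.
    If A is (m x n) and B is (p x q), the result is (m*p x n*q)."""
    return [
        [a * b for a in row_a for b in row_b]
        for row_a in A for row_b in B
    ]

shared = [[1.0, 2.0], [3.0, 4.0]]  # shared 2x2 factor (reused across modules)
expander = [[0.5], [0.25], [0.1]]  # hypothetical per-module 3x1 factor
aligned = kron(shared, expander)   # 6x2 matrix, matching a taller weight shape
print(len(aligned), "x", len(aligned[0]))  # → 6 x 2
```

One shared matrix can thus serve modules of different shapes, with only the tiny per-module factor left unshared.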
---
**Comment**: "*Typo in the caption of Figure 1: (left) and (right) should be (top) and (bottom)*"
**Answer to C3**: Thank you for pointing this out. We will correct the caption of Figure 1 from "(left)" and "(right)" to "(top)" and "(bottom)" in the revised version.
---
Rebuttal Comment 1.1:
Comment: Really appreciate the authors' rebuttal, increasing my rating to weak accept now.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful feedback. We sincerely appreciate your insightful suggestions, which have contributed significantly to improving our work. Your support is invaluable, and we are grateful for the time and effort you have put into reviewing our submission. | null | null | null | null | null | null |
LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws | Accept (poster) | Summary: The paper focuses on loss-to-loss scaling laws in large language models (LLMs), which relate losses across pretraining datasets and downstream tasks. The authors find that 1. loss-to-loss scaling consistently follows shifted power-law trends, enabling prediction of test performance from training loss, as detailed in the conclusion section. 2. Pretraining data and tokenizer are identified as the dominant factors shaping these scaling laws, with experiments showing significant impact when these are varied. 3. Architecture has limited influence, and model size, context length, and optimization settings (e.g., Adam, cosine schedules) have negligible effects. 4. The authors recommend that practitioners should prioritize curating suitable pretraining datasets for optimal downstream performance, while architectures and other settings can be freely optimized for training efficiency.
## Update after rebuttal
I am keeping my score since my concerns are not fully addressed.
Claims And Evidence: Yes, each claim is accompanied by empirical results.
Methods And Evaluation Criteria: Yes, the authors train a series of models to validate their scaling laws.
Theoretical Claims: The paper is empirical.
Experimental Designs Or Analyses: The experiment design is sound and the ablation study is thorough. However, the conclusions drawn are not quantitative, drawing on subjective judgments when deciding which factors affect scaling laws more.
Supplementary Material: I briefly reviewed the figures in the appendix.
Relation To Broader Scientific Literature: The paper extends recent scaling law studies by systematically exploring factors influencing loss-to-loss scaling, contributing to understanding LLM performance optimization and generalization across tasks.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The authors conduct a comprehensive analysis of factors affecting loss-to-loss scaling.
2. The experiments are well organized and the results are clear.
Weakness:
1. The experiments do not bring to the table findings not implied by existing scaling law formulations. For instance, a scaling law is a function of model size and data size. Changing the dataset would shift the scaling law by affecting the fitted parameters. Hence, there is a plethora of studies and open-source efforts on producing higher-quality datasets, whose scaling curves differ from those of lower-quality datasets (DCLM, RedPajama, etc.).
2. Similarly, what the authors identified as less important is also inherently implied in existing scaling law formulation.
3. The authors did not improve upon the existing scaling laws' key limitation, which is connecting training/validation loss to individual downstream task performance. For instance, if the authors believe that changing the architecture has a limited impact on loss-to-loss scaling laws, is it possible to show two models (one decoder-only transformer-based, one Mamba-based) with fixed recipes converging to similar downstream performance across tasks? This remains one of the most popular open questions in the field to this day.
Other Comments Or Suggestions: The paper is well written.
Questions For Authors: My questions are listed above.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thanks for your helpful comments and feedback.
**[EDA1 - Conclusions are subjective]**: We now quantify our findings in two ways:
1. We quantify the goodness of fit of the loss-to-loss power laws as $R^2$. We show this in our revised [Fig. 1](https://ibb.co/kgpRZNZP) (note that Fig. 1 has also received other updates; please refer to our reply to Reviewer he3E). Generally, the goodness of fit is very close to $1$. We will similarly update all figures for the camera-ready.
2. We quantify the impact of different interventions as the area between fitted curves in a newly added [Tab. 2](https://ibb.co/7tjs1xL5). Pretraining data clearly has the biggest impact on the scaling laws. Please also refer to our reply to Reviewer he3E for our updated conclusion on the impact of the tokenizer.
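To make the R² quantification concrete, the following is a minimal, self-contained sketch (standard library only; the shifted power-law form follows the paper's loss-to-loss formulation, while the synthetic data, parameter values, and grid ranges are illustrative assumptions) that fits L_test = K·(L_train − E)^κ + E′ to train/test loss pairs and reports the goodness of fit:

```python
import random

def shifted_power_law(L1, K, E, kappa, Ep):
    """Shifted power law relating train loss L1 to test loss."""
    return K * max(L1 - E, 1e-9) ** kappa + Ep

# Synthetic (train loss, test loss) pairs from known parameters plus noise.
random.seed(0)
true = dict(K=1.3, E=2.5, kappa=0.8, Ep=1.9)
xs = [2.8 + 1.2 * i / 39 for i in range(40)]
ys = [shifted_power_law(x, **true) + random.gauss(0, 0.005) for x in xs]

# Grid-search the nonlinear parameters (E, kappa); for each candidate pair,
# K and Ep follow in closed form from ordinary least squares.
best = None
for E in (2.0 + 0.05 * j for j in range(11)):
    for kappa in (0.5 + 0.05 * j for j in range(15)):
        fs = [max(x - E, 1e-9) ** kappa for x in xs]
        n = len(xs)
        fbar, ybar = sum(fs) / n, sum(ys) / n
        den = sum((f - fbar) ** 2 for f in fs)
        K = sum((f - fbar) * (y - ybar) for f, y in zip(fs, ys)) / den
        Ep = ybar - K * fbar
        sse = sum((y - (K * f + Ep)) ** 2 for f, y in zip(fs, ys))
        if best is None or sse < best[0]:
            best = (sse, K, E, kappa, Ep)

sse, K, E, kappa, Ep = best
sst = sum((y - sum(ys) / len(ys)) ** 2 for y in ys)
r2 = 1 - sse / sst
print(f"fit: K={K:.2f}, E={E:.2f}, kappa={kappa:.2f}, E'={Ep:.2f}, R^2={r2:.4f}")
```

With low observation noise, the recovered R² is very close to 1, matching the near-perfect fits reported for the empirical loss-to-loss curves.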
**[Conventional vs. loss-to-loss scaling laws]**: The distinction between compute-to-loss and loss-to-loss scaling laws may not have been sufficiently clear in the text and caused some confusion; we have updated our introduction and related work sections to explain this better.
To reiterate, we focus on _loss-to-loss scaling laws_ (i.e., train-to-train, train-to-test, test-to-test), not _compute-to-loss scaling laws_ (e.g., Kaplan et al., 2020; Hoffmann et al., 2022). While compute-to-loss scaling laws are primarily used to find optimal compute budgets, loss-to-loss scaling laws can help study generalization, i.e., how performance transfers from training distributions to downstream tasks (Brandfonbrener et al., 2024; Du et al., 2025). Although compute-to-test scaling can be informative, they do not allow for connecting training loss to downstream performance, as you pointed out. This is where loss-to-loss scaling laws become a fascinating object of study, as they explicitly model how training/validation loss is converted to test loss, a proxy for test performance.
Please also see our response to reviewer fuqS on the utility of loss-to-loss scaling laws.
**[W1, W2 - Findings are implied by existing scaling law formulations]**: We respectfully disagree. As the reviewer points out correctly, conventional scaling laws (i.e., compute-to-loss scaling laws) are a function of model and data size. That formulation alone does not automatically imply that changing data _quality_ without affecting data _size_ should impact the scaling law.
Having said that, for _compute-to-loss scaling laws_, it is of course well known that data quality _does_ play a role. However, other factors like architecture, optimizer, and learning rate scheduler also matter for compute-to-loss scaling laws (Li et al., 2025). It is not at all “inherently implied in existing scaling law formulations” which factors impact scaling laws and which don’t – after all, none of these factors are quantifiable and explicitly enter the scaling law.
Now, _loss-to-loss scaling laws_ are a relatively recent object of study. Prior to our study (Takeaway 1), it was unclear whether loss-to-loss scaling laws could generally be described by shifted power laws at all, since previous studies like Brandfonbrener et al. are very limited in their settings. On top of that, it is unclear which factors, if any, play a role. Again, it is not at all “inherently implied in existing scaling law formulations” whether pretraining data (quality, not size), architecture, tokenizer, or other hyperparameters might impact the scaling law. If we were to go by the experience of compute-to-loss scaling laws, we might assume that most of these factors matter. In contrast, we find that loss-to-loss scaling laws are notably insensitive to many factors, while strongly depending on the pretraining data. As reviewer EE76 points out, “this is of broader interest to the community, in terms of advancing our understanding of the role of architecture, optimization, etc. in downstream properties”.
**[W3b - Mamba & Llama perform similarly]**: As you note, determining whether drastically different architectures like transformer-based Llama and state-space-based Mamba achieve similar downstream performance given identical training setups is a significant open question.
We have to emphasize the distinction between compute-to-loss and loss-to-loss training laws. While compute-to-loss training laws can answer whether two models converge to the same training loss with increasing compute, loss-to-loss training laws answer whether models reach the same downstream performance given a training loss.
In this sense, yes, it is precisely the case that Mamba and Llama models “converge to similar downstream performance across tasks” given they reach the same training loss. To further illustrate this, we present in [Tab. 3](https://ibb.co/tpgMmP1Z) multiple Llama and Mamba models trained under identical conditions, which show nearly indistinguishable downstream performances.
We hope this addresses your concerns and encourages you to raise your score.
(Li et al, 2025) (Mis)Fitting: A Survey of Scaling Laws | Summary: The paper studies loss-to-loss scaling laws in language models, covering both predicting the language modeling log loss across different data distributions (“train-to-train”) and predicting log loss proxies for downstream tasks performance (“train-to-test’). The main finding in this paper is that loss-to-loss scaling is insensitive to model architecture and optimization hyperparameters, mildly sensitive to the choice of tokenizer, and very sensitive to the training data distribution.
## Update after the rebuttal
The authors have adequately addressed my concerns. I therefore increase my score to 4.
Claims And Evidence: The claims of this paper are generally well supported by evidence, except for one notable issue: it is not clear if the language models studied in this paper can be called “large.” The authors train models of up to 420M parameters and supplement the analysis with additional models of undisclosed sizes. Based on the FW-Edu validation loss, none of the supplementary models are large: using their own models the authors reach a loss of about 2.9 (according to Fig. 30) while the lowest loss shown in the paper is about 2.7 (in Fig. 1), which makes me unsure if even the 1.8B parameter FineWeb-Edu ablation model by HuggingFace was included.
Clearly, truly large models (say with 7B parameters or more) were not included in this study, and I am not sure why: the web is full of scaled families of open-weights LM’s, some including intermediate checkpoints and trained on fixed datasets. These models could be downloaded, evaluated, and used to test the claims of the paper on actually large language models.
The lack of evaluation on larger models is my primary concern regarding this paper - if it is addressed I will be happy to increase my score.
Methods And Evaluation Criteria: See “Claims And Evidence” above.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: See “Claims And Evidence” above.
Supplementary Material: I’ve read all the figures in the supplementary material.
Relation To Broader Scientific Literature: This paper contributes to the body of evidence that the performance of neural networks under distribution shift tends to be predictable with very few “effective robustness” interventions beyond changing the training data
Essential References Not Discussed: None to my knowledge.
Other Strengths And Weaknesses: N/A.
Other Comments Or Suggestions: 1. I had a hard time parsing what is meant by “train-train” and “train-to-test” - I suggest including a more detailed and self-contained explanation in the beginning of the revised paper.
2. When comparing different tokenizers, comparing the average negative log probability per token is not entirely appropriate, since different tokenizers have different compression efficiencies and hence represent the same data using a different number of tokens. Instead, you should normalize the negative log-probability by a tokenizer-independent quantity, such as the number of bytes of uncompressed text in the sequence, or the number of tokens used by a fixed tokenizer. It would be interesting to see if such improved reckoning moves the “lines” corresponding to different tokenizers closer.
Questions For Authors: Precisely which models were evaluated and what were the evaluation results? You should answer this by tabulating the results of your entire testbed and include it as supplementary material. See “Claims And Evidence” above for context.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank you for your helpful comments and feedback. We address each of your questions and concerns below.
**[Q1 - Models trained and evaluated and complete results]**: Thank you for noticing this; we have made amendments in multiple places: First, we now mention in Sec. 4 the size of not only our models, but also the models sourced from HuggingFace. Second, we have expanded on App. A and Tab. 1 to clarify the models we trained and created a new table for models we evaluated from HuggingFace (see [Tab. 2](https://ibb.co/xq3KgQ4N)). Third, we will release the entire data frame with all model settings and evaluation results for the roughly 6000 checkpoints and our complete code with the camera-ready version.
Here’s an abridged overview of the evaluated models:
- We trained Llama and Mamba models ranging from approximately 60M to 420M parameters from scratch. We have also added some 7B models now, see below.
- We evaluated pretrained models from HuggingFace: Pythia (70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, 12B), GPT-Neo models (125M, 1.3B, 2.7B), GPT-NeoX (20B), and GPT-J (6B), all trained on variants of The Pile. These models are included in the right-most columns of Figs. 4–6, 16–18, and 22–27.
- We also evaluated FineWeb ablation models (1.7B) from HuggingFace trained on The Pile, C4, and FineWeb-Edu. These models are also included in Figs. 4-6 (e.g. Fig. 4 column 3).
Note that although we fit scaling laws using all available checkpoints, we display a randomly selected subset in figures for readability. We have made this more evident in the text and figure captions.
**[CE1 - Training/Evaluating large models]**: As explained above, our analysis already includes publicly available pretrained models with up to 1.7B parameters for Llama, up to 20B parameters for GPT, and up to 2.7B parameters for Mamba. We apologize for this not being evident in the original manuscript. As illustrated in Figs. 4-6 (especially in the right-most column showing the largest GPT models), these models generally achieve lower loss on validation and test sets compared to our models trained from scratch, but still adhere to the power-law trends and confirm our key insights. For example, the right-most column in Fig. 6 contains all of the largest HuggingFace models and clearly demonstrates that architecture does not affect loss-to-loss scaling laws even at these scales.
Following your suggestion, we revisited available pretrained models. However, most other publicly available checkpoints of large-scale models (≥7B) are unsuitable for our analysis, as they are trained on (often undisclosed) data mixtures that prevent us from directly comparing them to other models (e.g., the Falcon series).
All that said, we agree that especially the impact of the pretraining data in Fig. 4 is not conclusively shown for larger models, since the largest HuggingFace models included in our analysis were all trained on variants of The Pile (see Fig. 4, right-most column). To remedy this, we have now trained additional 7B models for 1B tokens ourselves. Due to compute and time constraints we have limited this to three settings:
- Llama-7B (tiktoken tokenizer) on FineWeb-Edu / on C4
- Llama-7B (GPT-2 tokenizer) on FineWeb-Edu
We’ve included all these models in our revised [Fig. 1](https://ibb.co/kgpRZNZP). All models follow the established power-law scaling curves and confirm our main conclusions.
**[OCS1 - Clarifying taxonomy used in paper]**: Thanks for the suggestion. We will state and clarify the taxonomy used in the paper early in the manuscript. We have also added an appendix section that explains the scaling laws in more detail, which we describe in our reply to reviewer EE76.
**[OCS2 - Normalizing tokenizer comparisons]**: Thank you for this excellent suggestion! We performed the experiment and have updated [Fig. 1](https://ibb.co/kgpRZNZP) for BPB. Indeed, after normalizing negative log-probability by bytes, the previously distinct tokenizer lines collapse onto each other. This confirms even more strongly that architecture, tokenizer, and optimization hyperparameters minimally affect loss-to-loss scaling—only pretraining data distribution truly matters. We will update other figures and sections of the manuscript accordingly.
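For reference, the bits-per-byte normalization can be sketched as follows (the per-token losses and token counts below are hypothetical numbers chosen for illustration): a tokenizer that compresses text into fewer tokens naturally shows a higher per-token loss, yet the byte-normalized quantities can coincide:

```python
import math

def bits_per_byte(nll_per_token_nats, n_tokens, n_bytes):
    """Convert a mean per-token NLL (in nats) into bits-per-byte:
    total nats -> total bits, then divide by the tokenizer-independent
    number of UTF-8 bytes in the evaluated text."""
    total_bits = nll_per_token_nats * n_tokens / math.log(2)
    return total_bits / n_bytes

# Hypothetical models with different tokenizers scoring the same 1 MB of text.
coarse = bits_per_byte(2.9, n_tokens=230_000, n_bytes=1_000_000)  # fewer tokens
fine = bits_per_byte(2.3, n_tokens=290_000, n_bytes=1_000_000)    # more tokens
print(f"{coarse:.3f} vs {fine:.3f} bits per byte")
```

Here the per-token losses differ substantially, yet after byte normalization the two settings coincide, mirroring how the previously distinct tokenizer lines collapse onto each other.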
We hope this addresses your concerns fully and encourages you to raise your score.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, and it is encouraging to see that normalizing by the tokenizer efficiency makes the choice of tokenizer even less significant.
However, my main concern is not only about the number of parameters in the model considered; it is about seeing how well the proposed scaling laws persist as we push the loss further down.
* Given a model family, I am not sure I follow why it matters if the training mixture is disclosed or not: as long as it is consistent across the models in the family, the test losses should be “on a line.”
* Moreover, some works provide model families with completely disclosed training data, and covering a larger range of compute than currently considered. One example that I’m aware of is the models released as part of “Language models scale reliably with over-training and on downstream tasks” by Gadre et al. (2024).
* I am particularly curious about the 1.7B ablation model trained on FineWeb-Edu. Is this model included in Figure 1? Could you write here its FineWeb-Edu validation loss and its HellaSwag test loss?
---
Reply to Comment 1.1.1:
Comment: Thank you for explaining your concerns in more detail. It seems we have misunderstood why you were asking about larger models. Specifically, there are two questions to consider:
1. Do interventions have the same effect (pretraining data matters, other factors mostly don’t) for larger model sizes?
2. Do larger models with lower loss still follow the power-law formulation we use?
We believe that our previous reply has shown conclusively that point 1 holds, but understand now that you’re asking about point 2, which we address below:
- **[Using different model families]**. Our argument was specific to point 1. If we want to perform targeted interventions, we of course need to be able to match the models in all but one of pretraining data, architecture, tokenizer, etc. This is why our original comparison figures only include model families that can be compared to at least one other model family. That said, you are of course right that for point 2, any model family with sufficient checkpoints can be used. We now additionally evaluate an OLMo model with up to 7B parameters that achieves low loss.
- **[Scaling law fits for larger models]**. To show that point 2 holds, we show a [new figure](https://ibb.co/93N3vtD2) containing scaling law fits for models up to 21B parameters and with overall lower loss. We will add this figure in the appendix.
The figure contains our own Llama models up to 7B parameters (green diamonds), GPT models (Pythia/GPT-NeoX) from HuggingFace up to 21B parameters (blue circles and orange squares), the HuggingFace FineWeb-Edu Ablation Model (red triangles), and the newly evaluated OLMo model (purple triangles). Note that the HuggingFace models have always been part of our analysis, as stated in our previous reply.
Kindly keep in mind that the models in this figure cannot be directly compared as in our interventional study, since they differ along multiple axes, sometimes in subtle ways. E.g., the HF ablation model (red triangles) uses a different and undisclosed version of FineWeb-Edu compared to our Llama model (green diamonds).
Several of these models achieve significantly lower test loss than our own Llama models, and all models follow a power law.
- **[Low-loss regime]**. With the HuggingFace GPT models and the newly evaluated OLMo, our analysis includes models in the sub-2.5 loss regime on HellaSwag (sub-0.8 BPB).
We also note that we fit scaling laws following the methodology of Brandfonbrener et al. (2024). While their setting is limited in terms of architectures, tokenizers, etc., they show that loss-to-loss scaling laws remain predictive up to large model sizes / low loss. We believe that, taken together, their results and ours leave little doubt that loss-to-loss scaling laws persist into the low-loss regime.
- **[Data for the 1.7B FineWeb-Edu HuggingFace ablation model]**. This model is not included in Fig. 1 as it differs from our Llama models in multiple dimensions (tokenizer, pretraining data, as explained above). It is, however, included in Fig. 4, column 3 and Fig. 5, column 2 and their variants in the appendix as “Llama, gpt2-HF, FineWeb-Edu”. Its validation and test loss is included in the complete dataframe of all checkpoints that we will release with our paper. For your convenience, we show an excerpt below.
| Steps | HellaSwag Loss | FW-Edu Loss | HellaSwag BPB | FW-Edu BPB |
|------:|---------------:|------------:|--------------:|-----------:|
| 2k | 3.30 | 3.62 | 0.96 | 0.85 |
| 10k | 2.92 | 3.26 | 0.85 | 0.77 |
| 20k | 2.84 | 3.18 | 0.82 | 0.75 |
| 30k | 2.80 | 3.14 | 0.81 | 0.74 |
| 40k | 2.77 | 3.12 | 0.80 | 0.73 |
| 50k | 2.76 | 3.10 | 0.80 | 0.73 |
| 60k | 2.74 | 3.09 | 0.79 | 0.73 |
| 70k | 2.73 | 3.07 | 0.79 | 0.72 |
| 80k | 2.72 | 3.06 | 0.79 | 0.72 |
| 90k | 2.70 | 3.05 | 0.78 | 0.72 |
| 100k | 2.69 | 3.03 | 0.78 | 0.71 |
| 110k | 2.69 | 3.02 | 0.78 | 0.71 |
| 120k | 2.67 | 3.01 | 0.77 | 0.71 |
| 130k | 2.66 | 3.00 | 0.77 | 0.71 |
| 140k | 2.66 | 2.99 | 0.77 | 0.70 |
| 150k | 2.65 | 2.98 | 0.77 | 0.70 |
| 160k | 2.65 | 2.98 | 0.77 | 0.70 |
We hope that this resolves your remaining concerns. We thank you for your valuable suggestions that led to multiple additions and improvements of the manuscript, and kindly ask that you consider adjusting your score in light of this. | Summary: This paper explores how loss-to-loss scaling laws depend on various factors in the training setting. While compute-to-loss scaling laws are often studied (i.e., training on X tokens, Y parameters will give you Z loss), there is recent interest in loss-to-loss scaling laws, which show how evaluation/training on one dataset can translate into evaluation/training on another. This paper finds that by varying the pre-training dataset, the relationship between validation loss and downstream test loss can change significantly. Varying the tokenizer can also change the scaling trend, while things like the model architecture (like Llama vs Mamba), optimization hyperparameters, context length, and model size have little impact.
Claims And Evidence: Evidence from numerous empirical runs supports the claim that data primarily determines loss-to-loss scaling, while other factors (optimization, architecture, etc.) impact this scaling less.
Methods And Evaluation Criteria: The evaluation criteria make sense for the problem at hand.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Experimental design and analyses appear sound, although there could be more clarity on how the shifted power law is fit (i.e., how to compute $E_{x|p}, E_{y|p}$ and parameters $K$ and $\kappa$).
Supplementary Material: Yes, all of it.
Relation To Broader Scientific Literature: Strengths:
+ This paper discovers that loss-to-loss scaling is not significantly impacted by changes in model architecture, model size, context length, and optimization settings. It is especially interesting that the scaling is not impacted by model architecture (Llama vs Mamba), which suggests that different architectures can converge to similar representations. This is of broader interest to the community, in terms of advancing our understanding of the role of architecture, optimization, etc. in downstream properties.
+ The implications of pre-training data determining the loss-to-loss scaling laws are interesting, because many data curation strategies primarily focus on optimizing one validation dataset's loss (for example, [1]). This paper suggests that we should exercise caution in this practice---achieving X loss on just one validation dataset could mean various different performances on a downstream task. That is, one validation dataset is not comprehensive enough to determine downstream performance.
Weaknesses:
- While this paper has the above interesting implication, it feels like it falls short in understanding how pre-training data alters the loss-to-loss scaling law. That is, if there is this extra confounding factor---the pretraining dataset---in these loss-to-loss scaling laws, how do we model this factor and eliminate the confounding? My hunch is that a multi-loss to loss scaling law could result in things lying on the same line (for example, see Data Mixing Laws [2]). The paper would have broader impact if it could provide a path forward for consistent loss-to-loss scaling laws, or how to make sense of pre-training data impacting these laws.
[1] https://arxiv.org/abs/2407.01492
[2] https://arxiv.org/abs/2403.16952
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Weaknesses:
Clarity:
- It is slightly unclear how a train-to-train scaling law is defined. In Brandfonbrener (2024), there are *two* models being trained on different data with the same number of params and tokens, and that makes up the x-axis vs y-axis. However, in Figure 4 each point is derived from one model trained on a particular dataset, and evaluated on FineWeb-Edu as well as other validation domains (and averaged). I see that lines 156-160 briefly mention that each point in the paper's plots shows losses of one model; however, train-to-train and train-to-test should still be formally defined.
- The clarity of the paper could be improved by explaining what each point on the various loss-to-loss plots represents, and concretely describing the experimental setup for each type of scaling law in the Appendix.
- The clarity could also be improved by adding more description to the x and y axes in the plots. For instance, in Brandfonbrener et. al., they often write "Loss on Hellaswag (Trained on data 1)" and "data 1" is depicted by the colors and the legend. This would make it easier for the reader to understand what the different colored points mean in relation to the axes.
Significance:
- The paper needs to go one level deeper in discussing the practical applications of their findings. For instance, how can a practitioner, who is using one training dataset/one validation dataset, use the insights from this paper to better predict performance on another dataset/task? Moreover, in section 5, the paper says that the data distribution is the key for achieving a desirable loss-to-loss scaling and in turn achieve a great downstream performance---what defines "desirable" here?
Other Comments Or Suggestions: None.
Questions For Authors: 1. Can you clarify how the shifted power laws are fit? i.e., how to compute $E_{x|p}, E_{y|p}$ and parameters $K$ and $\kappa$.
2. How can we better understand the role of pre-training data in the loss-to-loss scaling law?
3. Improve clarity: can you precisely explain how train-to-train and train-to-test scaling laws were constructed?
4. Add more discussion of the practical applications of their findings - how can a practitioner use the insights from this paper? What defines a desirable loss-to-loss scaling that can be used to achieve better downstream performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and feedback. We address your concerns below:
**[Q1, Q3, OSW Clarity 1 - On the construction and fitting of scaling laws]**: We have added two Appendix sections that (1) detail the scaling law formulation and (2) explain how parameters are estimated. Here’s a slightly abridged version:
> **Scaling Law Details**
We adopt the compute-to-loss scaling law formulation from (Brandfonbrener et al.) Eq. (4):
> $$L\left(f^{(N, D)}\right) = E + \left( \left( \frac{A}{N} \right)^{\frac{\alpha}{\beta}} + \frac{B}{D} \right)^\beta,$$
> where $f^{(N, D)}$ is a model with $N$ parameters trained on $D$ tokens and $E, A, B, \alpha, \beta$ are parameters to be fit.
> Notably, the irreducible error $E$ captures the minimum loss possible for model $f$ in the limit of infinite model and data size.
>
> By default, this is fit using the training or validation loss.
> However, as demonstrated by (Brandfonbrener et al.) and our experiments, we can alternatively predict the loss $L_x$ on dataset $\mathcal D_x$ achieved by model $f_p^{(N, D)}$ trained on the pretraining set $\mathcal D_p$:
> $$L_x\left(f_p^{(N, D)}\right) = E_{x|p} + \left( \left( \frac{A}{N} \right)^{\frac{\alpha}{\beta}} + \frac{B}{D} \right)^\beta.$$
> As in (Brandfonbrener et al.) Eq. (7), the irreducible error $E_{x|p} = L_x(f_p^*)$ then captures the minimum possible loss on $\mathcal D_x$ of a model trained on $\mathcal D_p$.
>
> With that, we can formulate the loss-to-loss scaling law for arbitrary combinations of pretraining data and two test or validation sets, as stated in Eq. (1).
> **Fitting Details**
> For each line in a plot corresponding to a loss-to-loss scaling law from Eq. (1), we first fit the two compute-to-loss scaling laws $L_x(f_p^{(N, D)})$ and $L_y(f_p^{(N, D)})$.
> This yields estimates for the irreducible errors $E_{x|p}, E_{y|p}$, which correspond to the minimum x- and y-value of the loss-to-loss line.
> We use SciPy's default `curve_fit` optimizer for fitting.
> In rare cases when all checkpoints have the same number of parameters $N$ or same number of tokens $D$ (this is the case only for a small subset of the HuggingFace models) and a compute-to-loss scaling law cannot be fitted, we instead estimate the irreducible error as the minimum loss achieved: $E_{x|p} = \min_{N,D} L_x\left(f_p^{(N, D)}\right)$.
> With $E_{x|p}, E_{y|p}$ from the compute-to-loss fits, we again use SciPy's `curve_fit` to fit $K, \kappa$ for the loss-to-loss scaling law from Eq. (1).
We add examples of compute-to-loss fits for some of the loss-to-loss scaling laws from Fig. 2 in this section; see new [Fig. 10](https://ibb.co/SqkwfwF). Note also that while Figs. 4-6 show averaged eval/validation performance, Figs. 22-27 in App. F include results for specific validation and evaluation datasets (C4, The Pile UC, ARC-Easy, and HellaSwag).
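To make the two-stage fitting procedure above concrete, here is an illustrative Python sketch (not the authors' code; the synthetic data, the fixed irreducible errors, and the initial guesses are all assumptions) of fitting $K$ and $\kappa$ for the shifted power law with SciPy's `curve_fit`:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch of stage 2 of the procedure described above:
# with irreducible errors E_x, E_y taken from compute-to-loss fits, fit K
# and kappa for the shifted power law L_y = E_y + K * (L_x - E_x)**kappa.
rng = np.random.default_rng(0)

E_x, E_y = 2.0, 1.5            # pretend these came from compute-to-loss fits
K_true, kappa_true = 0.8, 1.2  # ground truth used to generate synthetic data

# Synthetic checkpoint losses on datasets x and y.
L_x = E_x + np.linspace(0.1, 2.0, 30)
L_y = E_y + K_true * (L_x - E_x) ** kappa_true + rng.normal(0, 1e-3, L_x.size)

def loss_to_loss(lx, K, kappa):
    # Shifted power law with the irreducible errors held fixed.
    return E_y + K * (lx - E_x) ** kappa

(K_fit, kappa_fit), _ = curve_fit(loss_to_loss, L_x, L_y, p0=(1.0, 1.0))
```

With low-noise synthetic data, `curve_fit` recovers $K$ and $\kappa$ close to their ground-truth values.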
**[OSW Clarity 1a - More details on experimental setups]**: We have updated all the figure captions, Tab. 1, and [Tab. 2](https://ibb.co/xq3KgQ4N) in the appendix to include more details for the models used.
**[OSW Clarity 1b - Clearer axes labels]**: Thank you for pointing this out. In our case, plot colors correspond to different dimensions in different plots. In Fig. 1, the x- and y-axis correspond to two specific datasets, and colors denote different interventions. In Fig. 2, the x-axis is fixed, and colors denote the dataset of the y-axis (most similar to Brandfonbrener). In Figs. 4-6, axes are again fixed (and y-axis reports an average over multiple datasets), and colors denote different values for the specific intervention. In all cases, the training set is specified in the caption. We have now updated the figure captions and legend layout to clarify this.
**[W1 and Q2 - Modeling the influence of pretraining data]**: Quantitatively modeling the influence of data distributions is an intriguing open problem. The central issue is the difficulty in mapping differences in data distributions onto a simple feature space (e.g., scalar or vector values). In Data Mixing Laws [2], the mixing ratio serves this purpose effectively. However, we face the additional complexity of quantitatively comparing disjoint pretraining distributions with unknown compositions, a challenge beyond the scope of this work. We also do not believe multi-loss-to-loss scaling laws offer a satisfying solution. Without a reliable way to quantify diverse large-scale distributions and their relationships, we would need separate multivariate power laws for each combination of losses, again without a direct means for meaningful comparison.
**[OSW2 and Q4 - Practical utility of the paper]**: We refer you to our response to Reviewer fuqS (section: **[W1 & Q2b - On the utility of loss-to-loss scaling curves]**) for an abridged version.
We hope this addresses your concerns and encourages you to raise your score. | Summary: The paper investigates how loss-to-loss scaling (i.e. scaling laws between losses on different datasets) for LLMs is influenced by model architecture, tokenizer, and training datasets. The authors experimentally find that:
1. loss-to-loss scaling consistently follows shifted power laws.
2. The effects of pretraining data are more pronounced than the effects of model architecture and HPs.
## update after rebuttal
I will keep the current score.
Claims And Evidence: The claims are supported by evidence.
Methods And Evaluation Criteria: See the strengths and Weaknesses section.
Theoretical Claims: na
Experimental Designs Or Analyses: Experiments are sound
Supplementary Material: Model architecture
Relation To Broader Scientific Literature: na
Essential References Not Discussed: The paper discusses how the final loss depends upon pretraining data among other things. The recent paper "Scaling Optimal LR Across Token Horizons" also discusses how pretraining data will influence scaling properties.
Other Strengths And Weaknesses: Strengths:
1. The paper is very clearly written, and the claims it makes are well supported.
2. Scaling laws are impactful.
3. Identifying what interventions are effective for LLM training can save lots of compute.
Weaknesses.
1. The practical utility of the paper is limited. The authors demonstrated that training data has a larger effect on the scaling laws than model architecture. An alternative way to conclude that pretraining data is more important than architecture is to just use e.g. MMLU numbers. It is not clear how the methodology the authors propose would be better than this baseline approach to determine what interventions are effective and which are not.
Other Comments Or Suggestions: na
Questions For Authors: 1. Could you discuss more in detail how the irreducible errors are estimated?
2. Could you somehow quantify how your approach is better at identifying what interventions are effective than a baseline approach which would just use e.g. MMLU numbers as a quality metric?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank you for the helpful feedback. We address your concerns as follows:
**[W1 & Q2a - On comparing raw numbers]**: Comparing individual performance metrics like MMLU is indeed possible and can illustrate the effectiveness of an intervention for a specific model scale and setting. For example, we added [Table 3](https://ibb.co/tpgMmP1Z) to the appendix, which shows the raw numbers for MMLU and several other tasks. Note that Figs. 22-27 in App. F already shows versions of Figs. 4-6 for specific validation and evaluation datasets (C4, The Pile UC, ARC-Easy, and HellaSwag).
That said, comparisons on a single model scale and setting or on a single test set fall short in understanding whether the effectiveness of an intervention is dependent on model size, dataset size, downstream task or other factors. Additionally, evaluations on a single test set like MMLU are noisy, as evident from [Fig. 10](https://ibb.co/Rpn1xJRX), [Fig. 11](https://ibb.co/WNkR7SV3), [Fig. 12](https://ibb.co/jvPSBB2N), [Fig. 13](https://ibb.co/gZmwBTz0), [Fig. 14](https://ibb.co/sdJ3nLWz), and [Fig. 15](https://ibb.co/Zpn5c6k8), which we have added to the appendix. Our comprehensive study addresses this by systematically evaluating performance across multiple scales, factors, and downstream tasks while rigorously controlling all training parameters (including learning rate, optimizer, context length, tokenizer, model and dataset sizes).
**[W1 & Q2b On the utility of loss-to-loss scaling curves]**: Loss-to-loss scaling laws across datasets provide critical insights beyond single-setting performance comparisons, as detailed in our related work and discussion sections. We reiterate the most important points here and have updated the corresponding sections in the paper to state this more clearly:
- **Generalization**: Loss-to-loss scaling laws (train-to-train, train-to-test, or test-to-test) provide insight into how performance transfers across datasets (Taori et al., 2020, Fang et al., 2022, Brandfonbrener et al., 2024). Specifically, these scaling laws help answer the generalization question: if a model achieves a certain performance on a given dataset (generally training dataset), how well does it do on another task/dataset? As **Reviewer EE76** rightly points out, this can be of “broader interest to the community, in terms of advancing our understanding of the role of architecture, optimization, etc. in downstream properties.”
- **Compute Budget Translation**: By combining train-to-test scaling laws with compute-to-train scaling laws, we can more precisely understand how compute budget translates into downstream task performance (Brandfonbrener et al., 2024), and also help uncover emergent model abilities (Du et al., 2025).
- **Factor Decomposition**: Separately analyzing compute-to-train and train-to-downstream scaling laws helps isolate factors influencing each step. For instance, in our work, we identify that **architecture and optimizer settings do not influence loss-to-loss scaling laws — but they do affect compute-to-train scaling laws** (Kaplan et al., 2020; Hoffmann et al., 2022; Brandfonbrener et al., 2024; Porian et al., 2024; Li et al., 2025). As a result, architectures and optimizers can be independently optimized to enhance compute scaling without negatively impacting downstream task performance.
- **Limitations of Single-Dataset Validation**: As **Reviewer EE76** highlights, our findings caution against relying solely on one validation dataset's loss (as typical in data curation methods, e.g., (Liu et al., 2024)) — achieving a particular loss on one validation set can correspond to varying downstream task performances. Thus, a single validation dataset may not comprehensively indicate downstream efficacy.
By illuminating the factors that consistently impact loss-to-loss scaling laws, we enable practitioners and researchers to use them as a tool for analyzing and optimizing model training.
**[Q1 - On the irreducible errors]**: We have added two Appendix sections that detail the scaling law formulation and explain how parameters are estimated; please refer to reply to **Reviewer EE76** for more details.
**[Regarding the referenced paper]**: We are happy to include the suggested reference in our camera-ready version. As you rightly point out, the impact of pretraining data on loss-to-loss scaling laws has been shown before (Taori et al., 2020, Fang et al., 2022, Brandfonbrener et al., 2024), and our results further confirm this finding. However, to the best of our knowledge, our study is the first to show this comprehensively for a large number of datasets and model settings and is the **first to analyze the impact of other factors like architecture, optimizer settings, and context length**.
We hope this addresses your concerns fully and encourages you to raise your score.
(Li et al., 2025) (Mis)Fitting: A Survey of Scaling Laws
Attention-Only Transformers via Unrolled Subspace Denoising | Accept (poster) | Summary: This paper presents a new transformer architecture by interpreting each layer as a subspace denoising operator for the token representations. The work is interesting, and the paper is well written in general. The authors also provide theoretical analysis on the developed model. Experimental results show that the proposed method can yield comparable performance in different applications.
Claims And Evidence: The experiments support the claims in the paper.
Methods And Evaluation Criteria: The proposed methods and evalutions make sense in the paper.
Theoretical Claims: The authors provide proofs to some of the theories.
Experimental Designs Or Analyses: The experiments are sufficient to support the claims in the paper. However, it could further benefit from comparison with more recent SOTA transformer-based methods.
Supplementary Material: The authors provide a proof for Theorem 3.1.
Relation To Broader Scientific Literature: The idea of interpreting each layer of transformer as a subspace denoising operator for token representations is interesting, which facilitates the understanding of the commonly used transformer network.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths: 1) Very good presentation of the work and the motivation is clear. 2) interesting findings to interpret a layer of transformer as a subspace denoising operator for token representation. 3) the work is theoretically founded and empirically verified in different tasks.
Weaknesses: 1) More fair discussions should be given in the experiments. For instance, the results in Table 1 show that the proposed method is 9.3% lower than the compared baseline. Using "comparable performance" is unfair.
2) The advantage of the proposed method might be strengthened by comparing with more transformer-based methods.
Other Comments Or Suggestions: N/A
Questions For Authors: See the weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > **Q1.** More fair discussions should be given in the experiments. For instance, the results in Table 1 show that the proposed method is 9.3% lower than the compared baseline. Using "comparable performance" is unfair.
**A1.** Thanks for the comment. To address the concern, we will change "comparable performance" to "worse performance" and revise the manuscript to provide a more accurate description of the results.
Additionally, after a more careful examination, we found that the CRATE baseline we compared against uses a patch size of 8, while our implementation uses a patch size of 16, making the comparison unfair. A smaller patch size results in more image tokens, which usually leads to better performance. We are currently training models with a patch size of 8 and will report the result shortly.
> **Q2.** The advantage of the proposed method might be strengthened by comparing it with more transformer-based methods.
**A2.** Thanks for the suggestion. As suggested, we first compare AoT-MHSA to Vision Transformer (ViT) [1] on the ImageNet. Due to the time limitation, we train the model on ImageNet-1K from scratch and report the results as follows:
| Model | # of Parameters | Accuracy |
| -------- | -------- | -------- |
| AoT-MHSA | 15M | 69.5% |
| ViT Small | 22M | 72.4% |
Next, we compare AoT-MSSA to Llama [2] architecture on the in-context learning task (Section 4.2.2) and report the result at the following anonymous link: https://postimg.cc/gallery/d0Ypwvc.
Based on the above experimental results, it is observed that the proposed architecture demonstrates performance that is comparable to, or slightly lower than, that of state-of-the-art transformers.
*[1] Dosovitskiy, Alexey, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations, 2021.*
*[2] Touvron, Hugo, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.* | Summary: The authors propose an attention-only architecture, using the multi-head subspace self-attention (MSSA), first proposed by Yu et al., NeurIPS 2023 ( and also JMLR 2024). They have a model in which the embeddings of the tokens come from a mixture of different subspaces, albeit with additional additive noise making each token slightly away from its natural subspace. The purpose of the network is to denoise these vectors at each layer, increasing the signal to noise ratio as one goes from layer to layer. They view their architecture as a special case of a multi-head attention model.
The authors provide some theoretical analysis assuming a particular model for $Z^{(l)}$. They try this architecture for a vision transformer on ImageNet top-1 accuracy and then also for language models doing next-token prediction. The performances are not meant to be competitive with the best engineered transformer models, but are still nontrivial. They also do some in-context learning for linear models. The number of parameters is only in the tens of millions, as opposed to billions.
## update after rebuttal
I am satisfied with the authors' clarifications. I am keeping the same score.
Claims And Evidence: I take that the authors' main claim is that transformers can act as denoisers which ultimately provide a compressed representation allowing easy training for downstream tasks. I think they have shown some evidence for this claim. They also claim the architecture is interpretable, and show that different heads have some semantic interpretability (Fig. 7, Appendix D).
Methods And Evaluation Criteria: Once more, the benchmarks and datasets are less demanding than those used for state-of-the-art transformers.
Theoretical Claims: I have not checked the proofs in Appendix B in detail but got a sense of the techniques involved and the general flow of the argument.
Experimental Designs Or Analyses: Experimental design and analysis seems sufficient.
Supplementary Material: Sections B (proofs) and D (Semantics).
Relation To Broader Scientific Literature: How transformers form such a powerful representation of sequences is something of a mystery. The authors are advancing a view of transformers as signal denoisers via something like subspace clustering. This MLP-free transformer is easier to analyze and lends some credence to this view. Of course, its performance is going to be far from satisfactory. We have to see if explainability/interpretability compensates for the loss of performance.
Essential References Not Discussed: I am not aware of such references.
Other Strengths And Weaknesses: There are some notations in the paper that I think would confuse the first-time reader. $Z^{(l)}, f^{l}$ have layer superscript $l$ in the equations but $U_k$ never does, giving the impression that the $U_k$ are the same for each layer. Figure 3's architecture has $U^{l}$, and so does Yu et al., 2023. Unfortunately, Figure 2 does the conceptual explaining with the $U_k$'s apparently the same for each layer. Also, layer norm appears in Fig. 3 but does not get mentioned much in the model or the analysis.
Other Comments Or Suggestions: None
Questions For Authors: I would prefer a clearer presentation of the architecture, and not having to rely on the previous papers for disambiguation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **Q1.** **Inconsistent notation**: $Z^{(l)}$, $f^l$ have layer superscript $l$ in the equation but $U_k$ never does, giving the impression that $U_k$ are the same for each layer. Figure 3 architecture has $U^l$ and so does Yu et al., 2023.
**A1.** Thanks for pointing this out. This confusion stems from a gap between the theoretical construction of transformers under Definition 2.1 and their practical implementation for real-world data. Specifically, our approach constructs a transformer in a forward manner by interpreting each layer as a subspace denoising operator, assuming that the initial token representations satisfy Definition 2.1. In this theoretical setting, it suffices to consider $U_k^{(l)} = U_k$ across layers. However, in practical scenarios with real-world data, these subspace matrices $\{U_k^{(l)}\}$ need to be learned gradually via backpropagation and may be different across layers. We will include the above discussion and the following equation in Section 3.1 to clarify this point:
$$
\mathrm{Z}^{(l+1)} = \mathrm{Z}^{(l)} + \eta \sum_{k=1}^K \mathrm{U}_k^{(l)} \mathrm{U}_k^{(l)^T} \mathrm{Z}^{(l)}\varphi\left( \mathrm{Z}^{(l)^T}\mathrm{U}_k^{(l)}\mathrm{U}_k^{(l)^T}\mathrm{Z}^{(l)}\right)
$$
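For concreteness, a minimal NumPy sketch of this update (not the authors' code; the head count, subspace dimensions, and the choice of a column-wise softmax for $\varphi$ are assumptions):

```python
import numpy as np

def softmax(S, axis=0):
    # Numerically stable softmax, applied column-wise by default.
    S = S - S.max(axis=axis, keepdims=True)
    e = np.exp(S)
    return e / e.sum(axis=axis, keepdims=True)

def mssa_step(Z, U_list, eta=0.1):
    """One subspace-denoising update.

    Z: (d, n) matrix of n token representations.
    U_list: list of per-head orthonormal subspace bases, each (d, p).
    """
    update = np.zeros_like(Z)
    for U in U_list:
        P = U @ U.T @ Z               # project tokens onto subspace k
        A = softmax(Z.T @ P, axis=0)  # phi(Z^T U_k U_k^T Z)
        update += P @ A               # U_k U_k^T Z phi(...)
    return Z + eta * update

rng = np.random.default_rng(0)
Z = rng.normal(size=(16, 8))                                   # d=16, n=8
U_list = [np.linalg.qr(rng.normal(size=(16, 4)))[0] for _ in range(3)]
Z_next = mssa_step(Z, U_list)
print(Z_next.shape)  # (16, 8)
```

Per-layer matrices $U_k^{(l)}$ would simply correspond to passing a different `U_list` at each layer.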
> **Q2.** **Layer norm:** Layer norm appears in Fig. 3 but does not get mentioned much in the model or the analysis.
**A2.** Yes, layer normalization indeed appears in Fig. 3 as part of the practical implementation of the transformer architecture. Our theoretical framework primarily focuses on the core components of transformers, namely self-attention and skip connections, as denoising token representations can be achieved without layer normalization when the initial token representations satisfy Definition 2.1. However, in practice, layer normalization is necessary to stabilize training and improve convergence. We will include a brief discussion on the role of layer norm in the revised version to clarify its inclusion in the architecture.
> **Q3.** I would prefer a clearer presentation of the architecture and not having to rely on the previous papers for disambiguation.
**A3.** Thanks for the valuable comment. To address this, we will include a more detailed and explicit description of the architecture in the revised manuscript to reduce the dependency on previous papers such as Yu et al. (2023a, b).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarificatory comments. I would like to keep the same score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer's acknowledgment of our clarifications. We believe the improvements presented in the rebuttal further enhance the clarity of the paper! | Summary: This paper proposes an attention-only transformer (AoT) architecture that eliminates the feed-forward network (FFN) modules found in traditional transformers, including CRAFT's Multi-head Subspace Self-Attention (MSSA). The authors argue that representation learning should compress noisy token representations toward a mixture of low-dimensional subspaces, and that multi-head self-attention naturally implements this denoising operation.
Claims And Evidence: ## Strengths:
1. The paper is well-written. Figures are clear to understand the paper.
2. The theoretical formulation is clear, particularly regarding the concept of a union of (possibly many) low-dimensional subspaces.
3. The paper provides a solid mathematical foundation for its claims.
## Weaknesses:
1. There appears to be an inconsistency between theory and implementation. According to the parameter calculations, MSSA ultimately employs the same methodology as MHSA, where all projection matrices U_o, U_q, U_k, and U_v are not shared. This contradicts the theoretical framework presented, suggesting a mismatch between the proposed theory and actual implementation. I am not sure about this point, as I did not check the authors' code.
2. The experimental results in Table 1 indicate that AoT's performance is significantly suboptimal. On ImageNet, improvements of 0.2 or more are typically considered statistically significant, with 20M-30M models usually achieving accuracy between 80-84%. However, AoT only reaches approximately 70%, making it difficult to consider this approach effective.
3. The claims regarding experimental results are problematic. In the nanoGPT experiments, according to some previous top-conference papers, improvements of 0.02-0.03 in validation loss are typically considered meaningful. While increasing network parameters generally leads to lower validation loss, the paper shows that under comparable parameter conditions, AoT's loss is approximately 0.1 worse than the baseline, yet the authors claim comparable performance, which is not a fair claim. Even more concerning, AoT with 180M parameters performs worse than the baseline (124M), strongly suggesting that AoT is an ineffective or significantly underperforming approach.
4. There is a dimensional inconsistency in Equation 5. Since head_k is d_h by n, head_k^T should be n by d_h, but the matrix dimensions do not align properly in the equation.
5. The mathematical foundations of the paper are difficult to fully verify as they require extensive knowledge of high-dimensional probability theory and considerable time investment. Upon brief examination of Lemma B.1, potential errors were identified. For example, when t=0, delta=1, and d=64, the lemma yields an impossible probability value. This suggests that additional constraints on t or other parameters may be necessary for the lemma to hold true.
Methods And Evaluation Criteria: Yes, it makes sense
Theoretical Claims: I partly checked its proofs and mentioned some problems above.
Experimental Designs Or Analyses: yes, it is ok, but the experimental results are not good enough.
Supplementary Material: Yes, but not totally.
Relation To Broader Scientific Literature: It provides a new perspective of understanding self-attention.
Essential References Not Discussed: references are ok.
Other Strengths And Weaknesses: see above
Other Comments Or Suggestions: see above
Questions For Authors: see above
Ethical Review Concerns: no ethical concerns
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **Weakness 1.** **Inconsistency between theory and implementation**: According to the parameter calculations, MSSA ultimately employs the same methodology as MHSA, where all projection matrices $U_o$, $U_q$, $U_k$, and $U_v$ are not shared...
**A1.** We should clarify that the theory and implementation are consistent. Specifically, we have implemented transformers using both MSSA (see AoT-MSSA in Section 4.1) and MHSA (see AoT-MHSA in Section 4.2). Our implementation of AoT-MSSA strictly follows the theoretical framework presented in the paper, where the projection matrices are designed according to the structured attention mechanism.
> **Weakness 2.** **Suboptimal experimental results**: The experimental results in Table 1 indicate that AoT's performance is significantly suboptimal.
**A2.** Thanks for the comment. After a more careful examination, we found that the CRATE baseline we compared against uses a patch size of 8 in Table 1, while our implementation uses a patch size of 16, making the comparison unfair. A smaller patch size results in more image tokens, which usually leads to better performance. We are currently training models with a patch size of 8 and will report the result shortly.
Moreover, we compare AoT-MHSA to Vision Transformer (ViT) [1] on the ImageNet. We train the model on ImageNet-1K from scratch and report the results as follows:
| Model | # of Parameters | Accuracy |
| -------- | -------- | -------- |
| AoT-MHSA | 15M | 69.5% |
| ViT Small | 22M | 72.4% |
We believe that with further tuning, we can narrow the performance gap even more.
*[1] Dosovitskiy, Alexey, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations, 2021.*
> **Weakness 3.** **Problematic claims on the experimental results:** In nanogpt experiments, according to some previous top-conference papers, improvements of 0.02-0.03 in validation loss are typically considered meaningful...
**A3.** Thank you for the feedback. We have retrained AoT-MHSA with additional hyperparameter tuning and present the updated experimental results below:
| Model (# of Parameters) | LAMBADA (val loss) | PTB (val loss) | WikiText (val loss) | LAMBADA (acc) | CBT CN (acc)| CBT NE (acc) | OWT (val loss) |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| AoT-MHSA (122M) | 4.42 | 5.52 | 4.19 | 0.38 | 0.86 | 0.82 |2.92 |
| GPT2 (124M) | 4.32 | 5.75 | 4.13 | 0.4 | 0.87 | 0.84 | 2.86 |
We acknowledge that our method is still underperforming. However, compared to Table 2, the proposed transformer has improved over our earlier results, achieving smaller validation losses. Its full potential has not been fully explored due to limited computing resources; with further hyperparameter tuning, we believe performance can be improved further. Additionally, the primary focus of this paper is on the theoretical contributions rather than empirical performance.
> **Weakness 4.** **Inconsistency in Eq. (5):** There is a dimensional inconsistency in Equation 5. Since $\mathrm{head}_k$ is d_h by n, $\mathrm{head}_k^T$ should be n by d_h, but the matrix dimensions do not align properly in the equation.
**A4.** Thanks for catching this error. To fix it, we revise Eq. (5) into
$$
\mathrm{MHSA}(Z) = W_O \begin{bmatrix}
\mathrm{head}_1 \\\\ \dots \\\\ \mathrm{head}_K
\end{bmatrix}.
$$
Here, $Z \in \mathbb{R}^{d\times n}$, $W_O \in \mathbb{R}^{d\times Kd_h}$, and $\begin{bmatrix}
\mathrm{head}_1 \\\\ \dots \\\\ \mathrm{head}_K
\end{bmatrix} \in \mathbb{R}^{Kd_h \times n}$, since each head $\mathrm{head}_k \in \mathbb{R}^{d_h\times n}$.
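The corrected shapes can be verified with a quick numeric check (the dimensions below are arbitrary toy values):

```python
import numpy as np

d, n, K, d_h = 16, 10, 4, 4          # model dim, tokens, heads, head dim
rng = np.random.default_rng(1)
heads = [rng.standard_normal((d_h, n)) for _ in range(K)]   # head_k: (d_h, n)
W_O = rng.standard_normal((d, K * d_h))                     # output projection

stacked = np.vstack(heads)           # heads stacked without transposing: (K*d_h, n)
out = W_O @ stacked                  # (d, K*d_h) @ (K*d_h, n) -> (d, n)
print(out.shape)                     # (16, 10)
```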
> **Weakness 5.** **Difficult mathematical theory**: Upon brief examination of Lemma B.1, potential errors were identified. For example, when $t=0$, $\delta=1$, and $d=64$, the lemma yields an impossible probability value. This suggests that additional constraints on t or other parameters may be necessary for the lemma to hold true.
**A5.** Thanks for the comment. We will provide additional explanations and clarification to facilitate understanding in the revised manuscript. The result in Lemma B.1 is a standard concentration inequality for Gaussian random vectors; see Theorem 5.6 & Example 5.7 in Ref [2].
It is important to note that the choice of $t \ge 0$ is crucial for the validity of the inequality. Setting $t=0$ would result in a trivial statement, as the inequality is designed to quantify significant deviations from the mean. Therefore, only when the deviation is sufficiently large does the inequality yield a meaningful result. For example, when $\delta=1$ and we set $t=2\sqrt{\log d}$, it holds with probability at least $1-2/d^2$ that $\left|\|x\| - \sqrt{d}\right| \le \sqrt{2\log d}+2$.
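The stated example can also be sanity-checked empirically; a small numpy sketch (sample size and seed are our choices) comparing the empirical tail against the $2/d^2$ bound for $d = 64$:

```python
import numpy as np

d = 64
t = 2 * np.sqrt(np.log(d))                    # deviation level t = 2*sqrt(log d)
rng = np.random.default_rng(0)
x = rng.standard_normal((10_000, d))          # 10k standard Gaussian vectors
dev = np.abs(np.linalg.norm(x, axis=1) - np.sqrt(d))
frac = (dev > t).mean()                       # empirical tail probability
print(frac, 2 / d**2)                         # empirical tail vs. the 2/d^2 bound
```

The empirical tail comes out well below the theoretical bound, as the concentration inequality predicts.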
*[2] Boucheron et al. (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press.* | null | null | null | null | null | null | null | null |
Learngene Tells You How to Customize: Task-Aware Parameter Initialization at Flexible Scales | Accept (poster) | Summary: This paper proposes a novel parameter initialization method called TAL, aiming to enhance the initialization effect of models for different tasks. Building upon the previous GHN and Learngene frameworks, TAL addresses their limitations. Although the GHN method is effective, it performs inadequately when dealing with large-scale models and requires retraining for each new task. In contrast, by integrating the Learngene framework with a task-aware mechanism, TAL is able to share knowledge across multiple tasks, thereby improving the accuracy of parameter initialization and eliminating the need for separate training for each task. The experimental results demonstrate that TAL outperforms methods such as GHN and LoGAH in both visual and natural language processing tasks.
Claims And Evidence: The argument of this paper is that the latest graph hypernetwork methods, such as LoGAH, exhibit poor initialization accuracy on IN-1K and CIFAR-100 when initializing larger models. Furthermore, the author presents the accuracy on IN-1K and other datasets in the experimental section, reasonably addressing the proposed issues.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable for the current problem.
Theoretical Claims: The theoretical basis is the learngene. Additionally, the concept of the computational graph is also mentioned. It is necessary for the author to elaborate on these concepts in more detail.
Experimental Designs Or Analyses: The experimental design is relatively reasonable, covering not only diverse visual tasks but also language tasks, which demonstrates the certain universality of the method proposed in this paper.
Supplementary Material: The author has not uploaded supplementary materials.
Relation To Broader Scientific Literature: Previous methods, such as LoGAH, show poor initialization accuracy on IN-1K and CIFAR-100 when initializing larger models. The TAL proposed by the author performs better on datasets like IN-1K.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The experiments are sufficient, but the analysis of the experiments is rather hasty. For example, why does TAL+ perform worse than TAL in some cases (the second row of 12-Tiny in Table 1)?
Other Comments Or Suggestions: 1 In Formula 6, Laux needs to be further explained.
2 It is recommended that the author optimize the logical structure of the text and provide a clearer elaboration of the two core concepts, learngene and computational graph.
Questions For Authors: 1 What is the difference between the Decoder and Encoder in Figure 2?
2 Why does TAL+ perform worse than TAL in some cases, such as the second row of 12-Tiny and the second row of 12-Small in Table 1? Additionally, why does TAL+ far outperform the other three methods in Table 1? And why does TAL+ perform much worse than TAL on AIRC. and OGLE in Table 2?
3 How much additional overhead does the two-stage training (Figure 2ab) incur compared to other comparative methods?
4 How is an entire ViT initialized? Is it initialized layer by layer or are the parameters of the entire network initialized directly?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your reviews and address your concerns as follows.
### Q1
The author should elaborate on the learngene theory and the computational graph in more detail.
### A1
We elaborate on the concept of Learngene in the introduction and related work of the paper. The essence of the Learngene framework lies in condensing critical knowledge from an ancestry model to initialize downstream models. Our implementation uses an encoder-decoder architecture to inherit knowledge from the ancestry model and generate downstream models.
Regarding computational graphs, we provide a detailed description in lines 155-164 of the paper and add relevant examples in the appendix to enhance the reader's understanding of the concept. The computation graphs primarily include `node_feat`, `node_info`, and `edges_embedding`. To clarify, we provide an example using the ViT model to illustrate how computation graphs are constructed.
```python
import torch

node_feat = torch.tensor([
    [9],   # 'input'
    [13],  # 'pos_enc'
    [5],   # 'msa'
    [12],  # 'ln'
    [3],   # 'linear'
    [7],   # 'sum'
    [3],   # 'linear'
])
node_info = [
    [[0, 'input', 'input', None, False, False]],
    [[1, 'pos_enc', 'pos_enc', (1, 768, 14, 14), False, False]],
    [[2, 'msa', 'msa', (1, 768, 14, 14), False, False]],
    [[3, 'ln', 'ln', (1, 768), False, False]],
    [[4, 'linear', 'linear', (768, 3072), False, False]],
    [[5, 'sum', 'sum', None, False, False]],
    [[6, 'linear', 'linear', (3072, 768), False, False]],
]
edges_embedding = torch.tensor([
    [0, 1, 1],  # input -> pos_enc
    [1, 2, 1],  # pos_enc -> msa
    [2, 3, 1],  # msa -> ln
    [3, 4, 1],  # ln -> linear
    [4, 5, 1],  # linear -> sum
    [5, 6, 1],  # sum -> linear
])
```
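As a small illustration of how such a graph can be consumed, the edge list above induces a processing order over the nodes. The topological sort below is illustrative plain Python, not TAL's actual encoder:

```python
# (source, target) pairs from the edge list above (edge type dropped)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
n_nodes = 7
indeg = [0] * n_nodes
adj = [[] for _ in range(n_nodes)]
for s, t in edges:
    adj[s].append(t)
    indeg[t] += 1

# Kahn's algorithm: repeatedly emit nodes with no unprocessed predecessors
order, frontier = [], [v for v in range(n_nodes) if indeg[v] == 0]
while frontier:
    v = frontier.pop()
    order.append(v)
    for w in adj[v]:
        indeg[w] -= 1
        if indeg[w] == 0:
            frontier.append(w)
print(order)  # [0, 1, 2, 3, 4, 5, 6] for this chain
```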
Hopefully, this simple example helps clarify the concept.
### Q2
Why does TAL+ perform worse than TAL in some cases.
### A2
The primary distinction between TAL and TAL+ lies in the size of the model library used during training. TAL+ leverages a larger model library, which generally improves performance, as evidenced by its superior results in 7 out of 9 tasks in Decathlon datasets. However, in certain cases, such as the second row of 12-Tiny and 12-Small in Table 1, TAL+ underperforms TAL. This discrepancy arises from differences in sample distribution—since TAL+ introduces more large models into training, the proportion of Tiny-sized samples decreases significantly. This imbalance affects TAL+’s ability to adapt to smaller models.
In Table 2, TAL+ performs worse than TAL on Airc and OGle. Both datasets have high class diversity but limited samples per class, with OGle being a few-shot task comprising 1623 unique characters from 50 alphabets. The key factor here is model capacity—TAL+ models trained on a larger model library tend to have more parameters, which can be less effective in scenarios with limited training samples. The increased capacity of TAL+ models may hinder generalization on these high-diversity, low-resource classification tasks. Despite these limitations, TAL+ is capable of handling larger models and achieves superior performance on most tasks.
### Q3
In Formula 6, Laux needs to be further explained.
### A3
The specific expression and calculation of $L_{aux}$ in Formula 6 is described in detail in Formula 4. Briefly, we use the output of the ancestry model as a soft label to align the output of the model initialized by TAL.
### Q4
What is the difference between the Decoder and Encoder in Figure 2?
### A4
We introduce the Encoder and Decoder in Section 4 of the paper. The Encoder learns and characterizes the model's computational graph, capturing its structure and parameter relationships. It adopts a Transformer architecture to efficiently handle long-range dependencies in graph structures.
The Decoder decodes the learned graph representations to generate the parameter matrices for target models. These matrices are then trimmed or copied to match the required model shape. The Decoder uses MLPs to convert graph representations into parameter values.
### Q5
How much additional overhead does the two-stage training incur compared to other methods?
### A5
Table 4 shows the computational overhead across methods, with TAL's total time of 36.19 hours including both training stages (28.47 hours for stage 1 and 7.72 hours for stage 2). TAL requires fewer training epochs in stage 2 than comparison methods, resulting in less total training time.
### Q6
How is an entire ViT initialized?
### A6
Our approach initializes the ViT model layer by layer. During the TAL forward pass, the encoder processes the computational graph to learn node embeddings; the decoder then generates parameters through internal loops, being called multiple times for the different parameter groups of each layer (attention weights, MLP weights, etc.). Despite being sequential, this initialization process completes in under 0.1 seconds for an entire ViT model, preserving architectural information while maintaining efficiency.
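A hypothetical sketch of this per-layer generation loop follows. The `decoder` stand-in, the embedding size, and the parameter shapes are all made up for illustration; TAL's real decoder is a trained MLP, not a random projection.

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(node_emb, shape):
    """Stand-in for TAL's MLP decoder: maps one node embedding to a
    parameter tensor of the requested shape (random projection here)."""
    W = rng.standard_normal((int(np.prod(shape)), node_emb.size)) / node_emb.size
    return (W @ node_emb).reshape(shape)

# one embedding per parameter group, generated layer by layer
shapes = {"attn": (64, 64), "mlp": (64, 256)}
node_embs = {f"block{i}.{g}": rng.standard_normal(32)
             for i in range(2) for g in ("attn", "mlp")}
params = {name: decoder(emb, shapes[name.split(".")[1]])
          for name, emb in node_embs.items()}
print(len(params), params["block0.attn"].shape)
```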
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's reply. However, the author has not addressed my core concerns.
For example, in A2, the author did not provide a detailed response regarding the discrepancies on the Airc dataset. Additionally, when comparing Tables 2 and 3, taking the 12-small, C100, and SVHN datasets as examples (the number of samples in the two datasets is similar), before training, TAL+ performed better than TAL on C100 and SVHN. But after training, TAL did not surpass TAL+ on C100 while it did surpass TAL+ on SVHN. Why is this the case?
I hope the author can conduct further analysis and provide explanations for the results in the experiments.
---
Reply to Comment 1.1.1:
Comment: We thank you for your reviews and address your concerns as follows.
Aircraft (Airc.) contains 100 images for each of 100 different aircraft model variants, such as the Boeing 737-400 and the Airbus A310. This dataset's limited size makes it insufficient for fully leveraging the capabilities of ViT models, which typically require larger training sets.
For further analysis of CIFAR100 and SVHN results:
# Performance Comparison
We set up different seeds and employed the TAL and TAL+ methods to initialize ViT-tiny and ViT-small models for the CIFAR100 and SVHN datasets, respectively. After initialization, all models were trained for 100 epochs. The results from multiple rounds of testing are presented below.
**Table 1: Performance comparison of TAL+ and TAL methods across three rounds for ViT-tiny and ViT-small models on SVHN.**
| Method | Round 1 | Round 2 | Round 3 |
|--------|---------|---------|---------|
| **ViT-tiny** | | | |
| TAL+ | 91.07 | 90.86 | 90.94 |
| TAL | 90.76 | 91.03 | 90.31 |
| **ViT-small** | | | |
| TAL+ | 91.97 | 91.57 | 91.72 |
| TAL | 91.64 | 91.95 | 92.02 |
**Table 2: Performance comparison of TAL+ and TAL methods across three rounds for ViT-tiny and ViT-small models on CIFAR-100.**
| Method | Round 1 | Round 2 | Round 3 |
|--------|---------|---------|---------|
| **ViT-tiny** | | | |
| TAL+ | 57.90 | 58.78 | 58.90 |
| TAL | 57.71 | 57.98 | 59.80 |
| **ViT-small** | | | |
| TAL+ | 60.51 | 60.24 | 60.77 |
| TAL | 61.40 | 61.63 | 60.49 |
To assess statistical significance, we analyzed results from multiple rounds under different random seeds. We tested whether models initialized with TAL and TAL+ would show significant differences in post-training performance, computing p-values for each scenario.
**Table 3: Statistical significance (p-values) comparing TAL+ and TAL methods on CIFAR-100 and SVHN.**
| Model | CIFAR-100 | SVHN |
|-----------|-----------|---------|
| ViT-tiny | 0.9674 | 0.3846 |
| ViT-small | 0.3103 | 0.6551 |
In statistical analysis, a p-value greater than 0.05 typically indicates that we cannot reject the null hypothesis. The table above shows that the p-values in all cases are substantially greater than 0.05. Based on these paired t-test results, we conclude that there is no statistically significant difference in performance between the TAL+ and TAL methods after training, suggesting that both initialization approaches ultimately produce comparable results.
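For three paired rounds (df = 2) the two-sided p-value of the paired t-test has a closed form, so the SVHN ViT-tiny entry can be reproduced in a few lines of plain Python:

```python
import math

def paired_t_pvalue_df2(a, b):
    """Two-sided paired t-test for three paired samples (df = 2), using the
    closed-form Student-t CDF  P(T <= t) = 1/2 * (1 + t / sqrt(2 + t^2))."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((x - mean) ** 2 for x in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)                        # paired t statistic
    return 1 - abs(t) / math.sqrt(2 + t * t)             # = 2 * P(T > |t|)

# ViT-tiny on SVHN, three rounds (Table 1 above)
p = paired_t_pvalue_df2([91.07, 90.86, 90.94], [90.76, 91.03, 90.31])
print(round(p, 4))  # 0.3846, matching Table 3
```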
# Parameter Characteristics Analysis
Although there is no significant difference in post-training performance between TAL and TAL+ initialization methods, models initialized with TAL+ perform significantly better than those initialized with TAL when evaluated in their untrained state. To further investigate this phenomenon, we analyzed the parameters of models initialized by both methods before and after training on the CIFAR100 dataset.
We examined the following metrics[1], [2]:
- **Parameter sparsity:** the proportion of parameters close to zero in the model
- **Parameter diversity:** cosine distance between models initialized with different random seeds
- **Parameter change magnitude:** the relative degree of change in parameters before and after training
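The three metrics can be sketched concretely as follows; the near-zero threshold `eps` and the toy vectors are our assumptions, while the exact variants used in the analysis follow [1], [2]:

```python
import math

def sparsity(w, eps=1e-2):
    """Fraction of parameters with magnitude below eps (threshold is illustrative)."""
    return sum(abs(x) < eps for x in w) / len(w)

def cosine_distance(u, v):
    """1 - cosine similarity between two flattened parameter vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1 - dot / (nu * nv)

def change_magnitude(before, after):
    """Relative L2 change of parameters over training."""
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(after, before)))
    return diff / math.sqrt(sum(b * b for b in before))

w0 = [0.001, -0.2, 0.0, 0.5, -0.003]   # toy "initialized" parameters
w1 = [0.1, -0.25, 0.02, 0.4, 0.0]      # toy "trained" parameters
print(sparsity(w0), cosine_distance(w0, w1), change_magnitude(w0, w1))
```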
The analysis results are presented below:
**Table 4: Parameter characteristics comparison between TAL and TAL+ methods for ViT-Small and ViT-Tiny models on CIFAR100.**
| Metrics | ViT-Small | | ViT-Tiny | |
|---------|-----------|--------|----------|--------|
| | TAL | TAL+ | TAL | TAL+ |
| Initial Parameter Sparsity | 79.97% | 77.99% | 78.89% | 76.00% |
| Post-training Parameter Sparsity | 79.48% | 77.50% | 78.10% | 75.17% |
| Parameter Change Magnitude | 1.69 | 2.73 | 2.45 | 2.64 |
| Post-training Parameter Diversity | 0.0485 | 0.0896 | 0.0424 | 0.0789 |
Despite TAL+ demonstrating superior performance in the untrained state, our experimental results indicate that both methods yield comparable performance after training. This phenomenon may be attributed to TAL+ converging too early to a specific region of the solution space, thereby limiting exploration of potentially better solutions. In contrast, the higher stochasticity of TAL enables the model to explore a broader range of the parameter space. Additionally, the higher parameter sparsity observed in TAL may provide an implicit regularization effect that helps the model achieve cleaner and better-generalized solutions, which explains why its final performance matches or occasionally exceeds that of TAL+.
[1]:Knyazev, Boris, et al. "Parameter prediction for unseen deep architectures." Advances in Neural Information Processing Systems 34 (2021): 29433-29448.
[2]:Knyazev, Boris, Doha Hwang, and Simon Lacoste-Julien. "Can we scale transformers to predict parameters of diverse imagenet models?." International Conference on Machine Learning. PMLR, 2023. | Summary: This paper addresses the high computational and storage overheads involved in training large pretrained models by focusing on effective parameter initialization. Building on recent advances in Graph HyperNetworks (GHN) and the Learngene framework, the authors propose a novel method called Task-Aware Learngene (TAL). TAL is designed to capture shareable, task-specific knowledge from a well-trained “ancestry” model and use it to predict initial parameters for models of varying scales.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The proposed TAL method leverages an encoder–decoder architecture where the encoder (learngene) extracts and transfers task-specific knowledge from an ancestry model, and a task hypernetwork generates task-specific bias parameters for a task-specific layer.
The method is evaluated on both vision and language tasks using standard model architecture datasets (e.g., ViTs-1K, ViTs+-1K, GPTS-1K) and downstream benchmarks (ImageNet-1K, Decathlon, MRPC, COLA, RTE, IMDB).
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experiments are conducted on multiple vision and language tasks with a variety of model scales (e.g., ViT-Tiny, ViT-Small, ViT-Base, and GPT-2 models with different depths).
However, an important Task-Aware Parameter initialization baseline is not compared: Learning to Generate Parameters of ConvNets for Unseen Image Data (TIP 2024)
Supplementary Material: NA
Relation To Broader Scientific Literature: TAL builds on prior work in hypernetworks and Task-Aware Parameter initialization methods.
Essential References Not Discussed: An important Task-Aware Parameter initialization work is not discussed: Learning to Generate Parameters of ConvNets for Unseen Image Data (TIP 2024)
Other Strengths And Weaknesses: The proposed method is sound and interesting.
However, there are several weaknesses:
An important and recent Task-Aware Parameter initialization work [1] is not compared.
There is limited discussion on the scalability of TAL when applied to extremely large models, which could be crucial for real-world applications.
The models used in this paper are too small and shallow. Deeper networks like ResNet-34 are encouraged to be used for testing, as in previous works [1].
The theoretical underpinning of the method, including convergence properties and optimality guarantees, is not fully explored.
[1] Learning to Generate Parameters of ConvNets for Unseen Image Data (TIP 2024)
Other Comments Or Suggestions: N.A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your reviews and address your concerns as follows.
### Q1
An important Task-Aware Parameter initialization baseline is not compared: Learning...(TIP 2024). The models used in this paper are too small and shallow.
### A1
Thank you for pointing this out. We will cite it in the relevant section. As for the model size predicted by TAL+, our framework has been tested on 12-layer ViT-Base models with 86M parameters, far exceeding ResNet-34’s 22M. It also handles deeper architectures, including the 16-layer ViT-Base (115M), which achieves 31.52% accuracy on ImageNet-1K without further training. Scaling up to an 18-layer ViT-Base (129M), the model maintains a competitive 22.25% accuracy. These experiments validate TAL’s effectiveness on larger models.
### Q2
There is limited discussion on the scalability of TAL.
### A2
Our experiments cover various model sizes, including ViT-Base (12 layers). Indeed, TAL is designed for scalability and can support larger models. We are currently testing it on large language models (e.g., LLaMA) and will present the results in future work.
### Q3
The theoretical underpinning of the method is not fully explored.
### A3
We discuss a simplified case of our TAL method and provide a theoretical derivation. We reach the following conclusions:
- **Convergence:** Gradient-based optimization of hypernetworks converges to stationary points under standard smoothness assumptions.
- **Approximation Capability:** Sufficiently expressive hypernetworks can approximate optimal model parameters to arbitrary precision.
**1. Problem Definition and Optimization Objective**
We define the following setup:
- **Hypernetwork** $H: \Theta \rightarrow \mathbb{R}^d$: a multilayer perceptron (MLP) that maps from parameter space $\Theta$ to model parameter space $\mathbb{R}^d$, generating parameters $p = H(\theta)$.
- **Model** $M$: also an MLP, using parameters $p$ to perform a binary (0,1) classification task and compute the loss $\mathcal{L}(p)$.
The optimization objective is to train the hypernetwork $H$ to minimize the cross-entropy loss:
$$\min_{\theta} \mathcal{L}(H(\theta)) = \mathbb{E}_{(x,y) \sim \mathcal{D}} \left[ -y\log(f_M(x; H(\theta))) - (1-y)\log(1-f_M(x; H(\theta))) \right]$$
**2. Convergence Analysis**
**Theorem 1 (Convergence to Stationary Point):** Assume the following conditions hold:
- The loss function $\mathcal{L}(p)$ is $\beta$-smooth
- Hypernetwork $H(\theta)$ is $L_H$-Lipschitz continuous
- The composed function $\mathcal{L}(H(\theta))$ has bounded gradients
Then, using gradient descent with learning rate $\eta < \frac{2}{L_H\beta}$, after $T$ iterations:
$$\min_{t=0,1,...,T-1} \|\nabla_{\theta} \mathcal{L}(H(\theta_t))\|^2 \leq \frac{2(\mathcal{L}(H(\theta_0)) - \mathcal{L}(H(\theta^*)))}{T\eta}$$
**Proof:**
By $\beta$-smoothness of $\mathcal{L}$ and $L_H$-Lipschitz continuity of $H$, the composite function $\mathcal{L}(H(\theta))$ is $(L_H\beta)$-smooth. For a $(L_H\beta)$-smooth function, when using gradient descent with learning rate $\eta < \frac{2}{L_H\beta}$:
$$\mathcal{L}(H(\theta_t)) - \mathcal{L}(H(\theta_{t+1})) \geq \eta\left(1 - \frac{L_H\beta\eta}{2}\right)\|\nabla_{\theta}\mathcal{L}(H(\theta_t))\|^2$$
Summing over $t=0,1,...,T-1$ and rearranging:
$$\sum_{t=0}^{T-1}\|\nabla_{\theta}\mathcal{L}(H(\theta_t))\|^2 \leq \frac{\mathcal{L}(H(\theta_0)) - \mathcal{L}(H(\theta_T))}{\eta\left(1 - \frac{L_H\beta\eta}{2}\right)}$$
$$\leq \frac{\mathcal{L}(H(\theta_0)) - \mathcal{L}(H(\theta^*))}{\eta\left(1 - \frac{L_H\beta\eta}{2}\right)}$$
Since $\eta < \frac{2}{L_H\beta}$ implies $1 - \frac{L_H\beta\eta}{2} > 0$, and using the minimum gradient norm:
$$T \cdot \min_{t=0,1,...,T-1} \|\nabla_{\theta}\mathcal{L}(H(\theta_t))\|^2 \leq \sum_{t=0}^{T-1}\|\nabla_{\theta}\mathcal{L}(H(\theta_t))\|^2$$
$$\leq \frac{\mathcal{L}(H(\theta_0)) - \mathcal{L}(H(\theta^*))}{\eta\left(1 - \frac{L_H\beta\eta}{2}\right)}$$
With proper learning rate, $1 - \frac{L_H\beta\eta}{2} \geq \frac{1}{2}$, resulting in:
$$\min_{t=0,1,...,T-1} \|\nabla_{\theta} \mathcal{L}(H(\theta_t))\|^2 \leq \frac{2(\mathcal{L}(H(\theta_0)) - \mathcal{L}(H(\theta^*)))}{T\eta}$$
This shows that as $T \to \infty$, the gradient norm approaches zero, indicating convergence to a stationary point. $\square$
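Theorem 1 can also be illustrated numerically with a toy instance: a scalar hypernetwork $H(\theta) = a\theta$ (so $L_H = |a|$) feeding a one-parameter logistic model on four 1-D points. Gradient descent on $\theta$ drives both the loss and the gradient norm down, consistent with the stationarity claim; all data and hyperparameters below are illustrative.

```python
import math

xs = [-2.0, -1.0, 1.0, 2.0]          # toy 1-D binary classification data
ys = [0, 0, 1, 1]
a = 0.5                              # hypernetwork H(theta) = a * theta

def loss_and_grad(theta):
    p = a * theta                    # generated model parameter
    L = g = 0.0
    for x, y in zip(xs, ys):
        s = 1.0 / (1.0 + math.exp(-p * x))          # sigmoid prediction
        L -= y * math.log(s) + (1 - y) * math.log(1 - s)
        g += (s - y) * x             # dL/dp for sigmoid + cross-entropy
    return L / len(xs), a * g / len(xs)   # chain rule: dL/dtheta = a * dL/dp

theta, eta = 0.0, 0.5
L0, g0 = loss_and_grad(theta)
for _ in range(1000):
    _, g = loss_and_grad(theta)
    theta -= eta * g                 # gradient descent on the hypernetwork input
L_T, g_T = loss_and_grad(theta)
print(L_T < L0, abs(g_T) < abs(g0))  # loss and gradient norm both decrease
```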
**3. Optimality Analysis**
**Theorem 2 (Universal Approximation):** If the hypernetwork $H$ is a sufficiently wide and deep MLP, then for any $\delta > 0$ and any target parameter $p^* \in \mathbb{R}^d$, there exists a parameter $\theta$ such that:
$$\|H(\theta) - p^*\| < \delta$$
**Proof:**
According to the universal approximation theorem, a sufficiently wide MLP can approximate any continuous function on a compact domain to arbitrary precision. Treating the mapping from a fixed input to the target parameter vector $p^*$ as a constant function, there exists an MLP architecture for $H$ and parameters $\theta$ such that $\|H(\theta) - p^*\| < \delta$ for any $\delta > 0$. $\square$
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. My concerns have been addressed. | Summary: Authors propose TAL, an encoder-decoder method to generate parameters for initializing models of various sizes given a model architecture and a task embedding.
Claims And Evidence: 1. Yes
Methods And Evaluation Criteria: 1. The method relies on a single ancestry model, which might be problematic as the latest model might serve as a better ancestry model, and then requires retraining. And since the entire framework requires a certain level of "pretraining" before it can serve as an initialization generator, this inherently brings a conflict. It is hard to achieve "train once and use forever". Maybe authors can show that when serving as initialization, the choice of ancestry model isn't that important.
2. Authors seem to include ViTs of various depths in their experiments as a demonstration for need of models of different sizes. What would be a potential use case for models with various depths? (i.e. wider models or deeper models) As far as I am concerned, a certain depth-to-width ratio is usually adopted and people rarely change that during application.
Theoretical Claims: n/a
Experimental Designs Or Analyses: 1. Experiments seem to be reasonable and demonstrating TAL's effectiveness over GHN-based method.
Supplementary Material: n/a
Relation To Broader Scientific Literature: n/a
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: n/a
Questions For Authors: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: We thank you for your reviews and address your concerns as follows.
### Q1
The method relies on a single ancestry model, which might be problematic as the latest model might serve as a better ancestry model, and then requires retraining. And since the entire framework requires a certain level of "pre-training" before it can serve as an initialization generator, this inherently brings a conflict. It is hard to achieve "train once and use forever". Maybe authors can show that when serving as initialization, the choice of ancestry model isn't that important.
### A1
Our approach allows flexibility in choosing the ancestry model based on the TAL training datasets, including the consideration of using the most recent model. It should be clarified that our approach does not involve any retraining of the ancestry model; instead, we retrain only the TAL encoder-decoder framework.
### Q2
Authors seem to include ViTs of various depths in their experiments as a demonstration for need of models of different sizes. What would be a potential use case for models with various depths? (i.e. wider models or deeper models) As far as I am concerned, a certain depth-to-width ratio is usually adopted and people rarely change that during application.
### A2
Our approach employs a structured scaling strategy rather than a fixed depth-to-width ratio. We sample layers from 3-12 and hidden dimensions with a step size of 32, ranging from 192-768 across all model depths. Attention heads are selected to ensure divisibility with hidden dimensions, maintaining architectural integrity. This parameterization offers greater freedom in depth and width combinations while still preserving architectural coherence, enabling the generation of models with diverse parameter counts tailored to various computational budgets. This flexibility addresses deployment needs across resource-constrained devices to high-performance environments.

---

Summary: This paper presents Task-Aware Learngene (TAL), designed to initialize large models via parameter prediction. To accomplish this, the authors first employ an encoder-decoder architecture for the TAL model and train it under the supervision of an ancestry model to facilitate knowledge transfer. Subsequently, with the aim of improving the multi-task generalization ability of downstream models, the authors fine-tune the trained TAL model on a multi-task dataset, thereby acquiring multi-task knowledge from the ancestry model. Comparative evaluations against alternative methods (such as GHN-3 and LoGAH) underscore the efficacy of TAL across various scenarios.
Claims And Evidence: In the right column of lines 36–40, the authors state that “…its effectiveness diminishes when initializing larger models like ViT-base,” a claim supported by Table 1. However, as indicated in the same table, the proposed TAL method also fails to initialize the ViT-base model on untrained configurations, achieving a very low-performance value of 0.1, identical to GHN-3 and LoGAH. While the TAL+ model outperforms other methods, its improvement appears to stem from training on an enhanced dataset (lines 236–244). My concern centers on whether all methods in Table 1 were trained on the same dataset. If so, why does only the TAL+ method perform well, while the performance of other methods is nearly random? If not, this comparison is problematic, as it evaluates methods trained on different datasets, thereby undermining the validity of the claim.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The paper does not contain any formal proofs or theorems.
Experimental Designs Or Analyses: I have reviewed the experimental design and analyses outlined in Section 5, and the following concerns arise: \
1. Lack of Supporting Evidence for Convergence Claims: The authors assert that “Models initialized with TAL/TAL+ converge faster and outperform …” (line 327). However, this claim is not substantiated by detailed experimental results, figures, or tables. While higher accuracy values are presented, no evidence is provided to demonstrate faster convergence, which is a critical aspect of the claim. \
2. Limited Validation of Untrained Initialization Effectiveness: The effectiveness of TAL/TAL+ in initializing models without training has only been validated on the Decathlon tasks. This narrow scope raises questions about the method’s broader applicability, as it has not been tested on other NLP tasks or diverse domains. Such validation is essential to establish the generalizability of the approach. \
3. Incomplete Comparison with Task Information: Task information appears to be a feature that could also benefit other methods, such as GHN-3. However, the experiments in Table 8 focus solely on the proposed TAL method without comparing it to other methods that incorporate task information. This omission limits the ability to assess whether the observed improvements are unique to TAL or could be achieved by enhancing existing methods with similar information.
Supplementary Material: I reviewed the entire supplementary material and have no concerns regarding its content.
Relation To Broader Scientific Literature: Task-Aware Learngene (TAL) builds upon and extends existing research in model initialization, knowledge distillation, and multi-task learning. Several prior works, such as GHN-3 and LoGAH, have explored ways to generate or predict parameters for a great diversity of models. TAL differs with these earlier studies by leveraging an encoder-decoder structure and an ancestry model for knowledge transfer, as well as its explicit use of multi-task fine-tuning to enhance downstream generalization. By effectively integrating knowledge from an ancestry model and refining it through multi-task datasets, TAL demonstrates that parameter prediction for large models can achieve strong performance across diverse situations.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths: \
1. The proposed TAL effectively integrates multi-task knowledge from an ancestry model via an encoder-decoder structure and multi-task fine-tuning, facilitating robust parameter prediction for diverse downstream tasks. \
2. Empirical comparisons with methods like GHN-3 and LoGAH suggest TAL’s consistent performance improvements and highlight its potential for broader applications.
Weaknesses: \
In addition to the experimental issues noted above, I have a few further concerns: \
1. Insufficient Motivation for the Ancestry Model: The rationale for introducing the ancestry model has not been fully articulated. A more detailed justification of its role and necessity would strengthen the paper’s foundation. \
2. Ambiguity in the Concept of “Learngene”: The term “Learngene” is somewhat confusing in this context. In prior literature, it appears to refer to “critical components from the ancestry model used to initialize models of various sizes.” However, in this paper, it seems to function more as a “critical component for a parameter prediction model, which is learned through multi-task tuning and conditioned on task embeddings.” This shift in interpretation raises questions about whether the observed performance improvements are truly attributable to the “Learngene” concept or are instead driven by the multi-task tuning process and the incorporation of task embeddings.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 2

---

Rebuttal 1:
Rebuttal: We thank you for your reviews and address your concerns as follows.
### Q1
Were all methods trained on the same dataset? If so, why does only TAL+ perform well? If not, the comparison is unfair, undermining the claim's validity.
### A1
TAL, GHN-3, and LoGAH all use the same model training dataset, ViTs-1K, which ensures a fair comparison between them.
The results in Table 1 and Table 2 of the paper clearly demonstrate the significant advantage of TAL under this fair comparison, as TAL outperforms the comparison method GHN-3/LoGAH on almost all datasets for the task of image classification.
Beyond this, TAL+ introduces the augmented dataset ViTs+-1K to further enhance large model initialization, making the use of an augmented dataset one of our key contributions.
### Q2
Lack of Supporting Evidence for Convergence Claims.
### A2
We use 12-layer ViT-Small initialized with five different methods and train it on Decathlon datasets. We plot the training loss versus epochs to observe convergence speed. The table below presents results for the UCF dataset, where we observe that ViT models initialized with TAL/TAL+ methods demonstrate significantly faster convergence. Similar trends are observed across other datasets. The corresponding convergence plots for all datasets will be included in the appendix, as images cannot be displayed in the response.
|**Epoch**|**10**|**20**|**30**|**40**|**50**|**60**|**70**|**80**|**90**|**100**|
|---|---|---|---|---|---|---|---|---|---|---|
|RandInit|59.9801|52.7643|45.5558|33.7087|17.9540|6.8132|2.9012|1.9813|1.6555|1.0367|
|GHN-3|123.9892|114.4703|106.5648|96.6006|87.0042|75.6672|60.6773|47.0442|33.3951|23.7440|
|LoGAH|116.4734|109.3229|101.4643|94.3091|86.6136|76.6659|65.2919|50.5523|37.1268|23.7927|
|TAL|8.4604|7.0408|3.1283|2.2561|1.8384|1.5562|1.0961|0.8141|0.9439|0.7467|
|TAL+|2.0044|2.7601|0.9048|0.8046|0.7405|0.5503|0.6014|0.3757|0.4805|0.4478|
**Table:** Training Loss Comparison of ViT-small with Different Initialization Methods on UCF Dataset Across Epochs
### Q3
Limited Validation of Untrained Initialization Effectiveness.
### A3
We demonstrate TAL's performance on unseen tasks in Table 5 of Section 5, encompassing tasks across diverse domains including Fashion MNIST (clothing and fashion items), FER2013 (facial expression recognition), and HAM10000 (dermatological skin lesion classification). The results indicate that models initialized with TAL consistently outperform those using random initialization and the LoGAH method.
### Q4
Incomplete Comparison with Task Information.
### A4
We augment the GHN-3 method with the same task information utilized in our TAL approach to evaluate its effectiveness. We initialize 12-layer ViT-Tiny and 12-layer ViT-Small using both standard GHN-3 and our enhanced GHN-3 with task information (GHN-3 w/ T.I.), then assess initialization quality on the Decathlon datasets. Results demonstrate that incorporating task information also improves GHN-3's performance.
| Model | Method | Airc. | C100 | DPed | DTD | GSTR | OGle | SVHN | UCF | Flwr | Avg\_Acc |
|------------|--------------|------|------|------|------|------|------|------|------|------|--------|
| **12-Tiny** | GHN-3 | 3.24 | 35.19 | 87.72 | 6.86 | 89.12 | 0.06 | 10.00 | 3.43 | 10.88 | 27.39 |
| | GHN-3 w/ T.I. | 5.31 | 22.03 | 84.66 | 11.49 | 94.01 | 0.06 | 10.00 | 1.28 | 21.27 | 27.79 **(+0.40)** |
| **12-Small** | GHN-3 | 2.64 | 5.55 | 84.30 | 7.23 | 84.39 | 0.06 | 10.00 | 1.74 | 9.51 | 22.82 |
| | GHN-3 w/ T.I. | 6.60 | 21.84 | 82.70 | 14.10 | 91.52 | 0.06 | 10.00 | 1.23 | 32.25 | 28.92 **(+6.10)** |
**Table:** Performance of untrained models initialized with GHN-3 and GHN-3 with Task Information (GHN-3 w/ T.I.)
### Q5
Insufficient Motivation for the Ancestry Model.
### A5
The role of the ancestry model is clearly articulated in Section 4. It provides both the soft labels necessary for Stage 1 training and task-specific labeling information for multitask scenarios. The ablation experiments in Table 8 of Section 5 quantitatively demonstrate the ancestry model's significant contribution to performance, confirming its critical importance in enhancing parameter initialization quality.
### Q6
Ambiguity in "Learngene" Concept.
### A6
The essence of the Learngene framework lies in condensing critical knowledge from an ancestry model to initialize downstream models. Our implementation employs an encoder-decoder architecture, departing from previous Learngene methods that relied on manual parameter extraction and heuristic designs. Despite innovations in implementation techniques, our approach maintains the basic paradigm of Learngene while providing greater flexibility and adaptability. In our experimental analysis, we conduct a detailed ablation study on the ancestry model, clearly demonstrating its significant contribution to performance improvement. | null | null | null | null | null | null |
---

Self-Supervised Transformers as Iterative Solution Improvers for Constraint Satisfaction | Accept (poster)

Summary: This paper mainly focuses on constraint satisfaction problems (CSPs). It proposes a transformer-based model to serve as an iterative solution improver, repeatedly revising the generated CSP solution until it is correct.
- The main _idea_ is to leverage the self-supervised learning paradigm, that is, to train the model without using any labeled or pre-processed solution.
- The main *contribution* is that this work adopts a self-supervised training scheme, which avoids the need for a pre-given solution during learning.
Claims And Evidence: There are a few claims that I think may not be supported in this paper:
- 1. The authors claim that their paradigm is more practical than RL-based methods, but there is no discussion of RL-based methods in either the theoretical analysis or the experimental comparison.
- 2. The authors compare their method with other SOTA methods in Table 2; however, the results appear to be copied directly from Du et al. (2024). This is not fair since the results are not based on the same base model. To the best of my knowledge, Du et al. (2024) ran this experiment with a ResNet.
Methods And Evaluation Criteria: The proposed model makes sense.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, I have checked the experiments. The main concern with the experiments is the unfair comparison between this work and others, especially the results copied from Du et al. (2024): those methods are based on a ResNet structure, while this is a transformer-based model. Also, since the authors claim their method avoids the disadvantages of RL-based methods, they should support this claim with experiments.
Supplementary Material: Yes, I have checked the code, which is runnable with a little debugging. The provided code is not fully prepared, but one can easily fix the issues.
Relation To Broader Scientific Literature: The key contributions *may* aid the development of large-scale CSP solving; however, the authors did not discuss this perspective in the current version.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: PROS:
- This paper is overall well written and easy to follow.
- The proposed method makes sense, and I have run the code to validate the overall effectiveness.
CONS:
- The comparison of experiments is somewhat unfair and too simple.
Other Comments Or Suggestions: Overall, this paper presents an interesting approach to CSP solving and offers an effective method. However, it is not yet ready for publication due to unsupported claims and unfair comparisons. Additionally, there needs to be more discussion and comparison regarding RL-based methods.
One suggestion I have is that if the unsupported claims can be substantiated, either theoretically or practically, the paper would be significantly improved. If these concerns are adequately addressed, I would be inclined to raise my score.
## After Reviewing the Rebuttal
I appreciate the author's response; however, I find the comparison based on different architectures (backbones) to be flawed. Additionally, I disagree with the following statement:
> "We agree that a broader study—e.g., isolating the effect of ResNet vs. GNN vs. SATNet vs. Transformer under various learning paradigms—would be highly valuable, and we share your interest in such work. However, we view that as a larger, survey-level endeavor rather than the goal of the current contribution."
If the comparison is deemed unfair, it should not be included as a method of evaluation in the scope of this paper. I believe that comparing these works using the same architecture (in this case, the Transformer) would be more beneficial and equitable.
Furthermore, the authors mention a theoretical discussion of reinforcement learning (RL) methods; however, this discussion lacks formal analysis and leans more towards narrative than substantive content, which makes the rebuttal appear overstated.
Considering these issues, I believe a weak reject is an appropriate score, and I think this paper has the potential for improvement.
Questions For Authors: Q1: If the iterative improvement method is effective, can this optimization approach also be applied to directly solve Constraint Satisfaction Problems (CSP) using local search? Furthermore, can iterative improvement be achieved by addressing an optimization problem with Stochastic Gradient Descent (SGD)?
Q2: Why do reinforcement learning (RL) methods not fit into this scenario? I'm confused because the reward function or signal in this setting seems much easier to define. For instance, we could simply set the reward as the difference in the number of satisfied constraints.
Code Of Conduct: Affirmed.
Overall Recommendation: 2

---

Rebuttal 1:
Rebuttal: We appreciate your thoughtful comments and are glad that you see the potential for our method in aiding the development of large-scale CSP solving.
>no discussion about RL-based methods both in theoretical analysis and experimental comparison.
We would like to clarify that both theoretical and experimental comparisons with RL-based approaches are included in the paper.
- Theoretical considerations are discussed in Sections 2.2 and 3.3.
- Experimentally, we directly compare against ANYCSP, a state-of-the-art RL-guided neural solver for CSPs.
We will revise the paper to make these points more prominent.
> The reward signal in this setting seems much easier to define. For instance, we could simply set the reward as the difference in the number of satisfied constraints.
Defining a reward as the number of satisfied constraints is indeed an intuitive design. However, prior work indicates that such rewards are not sufficient to guide learning effectively.
ANYCSP [1] experimented with the exact reward you have suggested and found that “models trained with this reward tend to get stuck in local maxima”. They comment that “intuitively, this simple reward scheme immediately punishes the policy for stepping out of a local maximum causing stagnating behavior”.
Their method implements a much sparser reward to avoid this issue by setting the reward to 0 in any step in which the new assignment is not an improvement over the previous best.
In contrast, our approach provides a dense training signal through differentiable penalty functions. These loss functions guide the model even when no constraint is fully satisfied.
[1] Tönshoff, Jan, et al. "One model, any CSP: graph neural networks as fast global search heuristics for constraint satisfaction." (2022).
>Comparison to other methods is not fair. Du et al. (2024) do this experiment based on ResNet.
We note that Du et al. themselves compare against models like SATNet and RRN, which are not ResNet-based models. SATNet learns the SDP relaxation of a SAT formulation, while RRN is a graph-based model.
While it is true that Du et al.'s method uses a ResNet base model, we follow standard practice and make comparisons between full frameworks for solving CSPs.
> can this optimization approach also be applied to directly solve Constraint Satisfaction Problems (CSP) using local search?
Yes! And indeed the existing literature of local search was a prominent inspiration for us, specifically, our approach is grounded in principles from the constraint-based local search literature [2]. In this approach, constraints are associated with “violation degrees” that measure how far a solution is from satisfying the constraint (violation = 0 ⇔ satisfied). We adopt this principle and extend it to the neural domain by designing differentiable approximations of these violations that work with probabilistic assignments.
[2] Michel, L., Hentenryck, P.V. (2018). Constraint-Based Local Search. In: Martí, R., Pardalos, P., Resende, M. (eds) Handbook of Heuristics.
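To make the violation-degree idea concrete, here is a small numpy sketch (our own illustration; the function names and this particular relaxation are assumptions, not necessarily the paper's exact penalty): a classic discrete AllDifferent violation degree, and a differentiable relaxation of it over per-variable softmax distributions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def alldifferent_violation(assignment, n_values):
    """Classic local-search violation degree: the number of excess
    occurrences of each value (0 iff all variables take distinct values)."""
    counts = np.bincount(assignment, minlength=n_values)
    return int(np.maximum(counts - 1, 0).sum())

def soft_alldifferent_violation(logits):
    """A differentiable relaxation over per-variable softmax distributions:
    the expected excess count of each value."""
    probs = softmax(logits, axis=-1)        # shape (n_vars, n_values)
    expected_counts = probs.sum(axis=0)     # expected occurrences per value
    return float(np.maximum(expected_counts - 1.0, 0.0).sum())

# A repeated value gives violation 1; a permutation gives 0.
print(alldifferent_violation(np.array([0, 1, 1, 3]), 4))  # 1
print(alldifferent_violation(np.array([2, 0, 3, 1]), 4))  # 0

# On near-one-hot inputs the relaxation agrees with the discrete degree.
sharp = 50.0 * np.eye(4)[[0, 1, 1, 3]]      # near-deterministic logits
print(round(soft_alldifferent_violation(sharp), 3))  # 1.0
```

The relaxed penalty is zero exactly when the constraint is satisfied (in expectation), and it provides a gradient even when every constraint is violated.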
>Furthermore, can iterative improvement be achieved by addressing an optimization problem with Stochastic Gradient Descent (SGD)?
This is a great insight. Indeed, if we ignore the transformer architecture and perform SGD on a set of variable assignments guided by our self-supervised loss, the variable assignments can be updated to become a (relaxed) satisfying solution. However, we found that in practice this method often leads to local optima; furthermore, we see a drastic decrease in performance because SGD focuses on a single instance and does not learn to generalize to other instances. Our framework essentially learns this update using a single-step transformer and is able to generalize by training on a larger set of instances.
To showcase this, we run SGD on a Sudoku board where the missing values are randomly initialized, updating them with gradients of the loss function for 10,000 iterations. We observe the following results, averaged over 10 runs:
|#Missing Values|19|33|41|47|
|-|-|-|-|-|
|#AllDifferent Satisfied (Max 27)|26.8|25.8|24.5|21.8|
We see that as the problem becomes harder with more missing cells, the number of satisfied alldifferent constraints becomes lower. We will include this discussion in the revision of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your efforts.
After reviewing your response, I have some concerns:
- The authors state that "theoretical considerations are discussed in Sections 2.2 and 3.3." However, I did not find any theoretical considerations in these sections. Section 2.2 focuses on related work, while Section 3.3 describes the loss function.
- My main concern is that the authors suggest they are using different backbones for various comparison methods. While I acknowledge that there may be mistakes in previous work, I encourage this study to address and rectify this issue.
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued engagement with our work. We appreciate the opportunity to clarify your remaining concerns.
> Theoretical considerations not found. Section 2.2 focuses on related work, while Section 3.3 describes the loss function.
Apologies for the confusion, Section 2.2 introduces RL approaches from the literature, and Section 3.3 discusses the theoretical disadvantages of training with RL compared to training with self-supervision. We appreciate the opportunity to elaborate further and make these theoretical motivations more explicit in the paper.
Our key point is the following: while RL is appropriate for sequential decision-making with a **black-box reward function** provided by an environment, the reward function for a CSP is not a black box. Specifically, if we view the degree of constraint satisfaction as the reward, then this is a “white box” for which we know the analytical form (i.e., the expression representing a constraint). What remains is to make the degree of constraint satisfaction (or the negative of the degree of constraint violation) differentiable w.r.t. an assignment of values to variables. This enables end-to-end gradient descent from an input (initial assignment) to an output (improved assignment) to a loss function (degree of constraint violation).
Any RL-based method for CSP must squash the extremely rich signal of constraint violation into a single scalar reward, aggregated over the multiple constraints. Gradients do not flow directly from this reward to the assignment. Any performant RL policy must “learn” the relationship between the action (i.e., an assignment) and the reward (i.e., degree of constraint satisfaction) in order to maximize the reward. This is unnecessarily complicated for a fully observed CSP.
For example, as discussed previously, ANYCSP [1] implements a very sparse reward function, setting the reward to zero unless a strictly better assignment satisfying more constraints is found. In contrast, our self-supervised formulation introduces continuous penalty functions that provide dense, gradient-based supervision, even for partially incorrect assignments.
Practically, this translates to **significantly faster training**. While ANYCSP reports 24–48 hours for full training, our model achieves strong results in under 6 hours, and fully converges within 10 hours.
[1] Tönshoff, Jan, et al. "One model, any CSP: graph neural networks as fast global search heuristics for constraint satisfaction." (2022).
> Backbones differ across methods; this needs to be addressed or rectified.
We appreciate this concern and agree that comparing different architectures under different paradigms can be problematic if the goal is to isolate performance of a specific training paradigm. However, that is not the goal of our paper.
The goal of our paper is to introduce a new heuristic solving framework: a Transformer architecture trained via self-supervised learning, designed specifically for CSPs. Our comparisons are therefore made against other end-to-end neural/traditional CSP solvers, each using their own architectural and learning choices. This is in line with the evaluation practices of all related work.
We are not using different backbones while comparing different methods, but comparing our method (Transformer + self-supervised learning) against other complete solver pipelines.
That said, our experiments showed significantly better performance than Yang et al. [2], who use a Transformer backbone similar to ours—making this a particularly direct and meaningful comparison.
We agree that a broader study—e.g., isolating the effect of ResNet vs. GNN vs. SATNet vs. Transformer under various learning paradigms—would be highly valuable, and we share your interest in such work. However, we view that as a larger, survey-level endeavor, rather than the goal of the current contribution.
[2] Yang, et al. "Learning to Solve Constraint Satisfaction Problems with Recurrent Transformer." The Eleventh International Conference on Learning Representations.

---

Summary: This paper presents a Transformer-based learning framework for solving CSPs. Specifically, they leverage a transformer architecture to refine the solution, where they show that decision variable position encoding is key for transformer learning. They adopt a continuous approximation as loss function, and show on three CSPs (Sudoku, Graph Coloring, Nurse Scheduling) that their proposed method outperforms the baselines.
Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence, although I have some questions / concerns regarding the evaluation. See my answer to “Experimental Designs or Analyses”.
Methods And Evaluation Criteria: The proposed methods and evaluation datasets seem to be reasonable, except for my concerns in “Experimental Designs or Analyses”.
Theoretical Claims: There’s no theoretical claims nor proofs in this paper.
Experimental Designs Or Analyses: I see the following issues regarding experimental design:
- The authors set a time limit of 10s for solving graph coloring and nurse scheduling. I find the time limit too short (e.g., maybe this reflects that the CSPs the authors evaluate on are too simple to solve). Can the authors benchmark on more complicated problems? Also, can the authors increase the time limit for solving the instances? I think it makes sense for the constraint programming solver CP-SAT to take more time than learning-based methods, so the authors should consider allowing CP-SAT to solve for longer.
- Can the authors compare with more heuristic baselines (e.g. local search, genetic algorithm)? I find the comparison with heuristic baselines somewhat lacking.
- Maybe I missed this, but I don’t see nurse scheduling results reported in the paper?
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: This paper contributes an improved learning methods for solving constraint programming problems. I find the discussions / experiments regarding position encoding interesting.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: I’m concerned that the paper may lack novelty: for example, encoding positional information for iterative solution improvement has been explored in the learning-for-combinatorial-optimization literature before, and the Gumbel-Softmax differentiable loss isn’t new either.
Other Comments Or Suggestions: Sec 2.2. “Yang et al. proposed a recurrent Transformer architecture” → You missed the year citation for Yang et al.
Questions For Authors: 1. Sec 3.3 Eq (5): “hyperparameters lambda_i is the weight assigned to p_i”. How did you decide the lambda_i? Would an adaptive way to select different lambda_i for different constraints lead to better performance? (potentially with learning, or some heuristic methods)
2. Instead of the differentiable loss with GumbelSoftmax, I wonder if the authors have tried reinforcement learning? The benchmark tasks seem to be relatively standard to design RL rewards, so I feel like an ablation study of differentiable loss v.s. RL would further strengthen the paper.
3. Sec 3.4 Iterative Test-Time Deployment: instead of feeding the entire solution back for refinement, have the authors consider only improving a subpart of the solution and fixing the rest (e.g. incorporate a local search procedure with learning)? How well would that work?
Code Of Conduct: Affirmed.
Overall Recommendation: 2

---

Rebuttal 1:
Rebuttal: Thank you for the thoughtful feedback and questions. We’ve conducted new experiments and clarified key aspects of the method in response to your concerns, which we believe strengthen the submission significantly.
> Can the authors benchmark on more complicated problems?
We have added experiments on the MAXCUT problem. Please refer to our response to reviewer tcHx for details.
> CP-SAT takes more time than learning-based methods, so the authors should consider allowing CP-SAT to solve for longer.
Efficiency is indeed a major advantage of our method and in many applications it is necessary to solve for a good feasible solution very quickly.
That said, we conducted new experiments where CP-SAT was allowed to run for 30 and 60 seconds on the Graph-Coloring task with 10 colors (where ConsFormer had previously outperformed it under 10s). Results are shown below:
||n=100|n=200|
|-|-|-|
|OR-Tools(10s)|52.41|10.25|
|OR-Tools(30s)|53.58|11.16|
|OR-Tools(60s)|53.67|11.66|
|ConsFormer(10s)|52.60|11.92|
We see that CP-SAT can outperform our method on small instances (nodes=100) with extended time, but it still underperforms on larger instances (nodes=200), even with 6x more time. Furthermore, in the new MAXCUT problem, 20 parallel runs (each with a 180s limit) were used to compute the results. ConsFormer also outperforms OR-Tools by a significant margin.
> compare with more heuristic baselines
We include 3 additional heuristic baselines for graph coloring:
- Greedy Coloring
- Feasibility Jump
- Random Search
The first is the greedy coloring algorithm implemented by networkx and the other two are local search approaches implemented by OR-Tools. We note that the LS algorithms are already present in the default OR-Tools used for the paper.
|Color count = 10|n=100|n=200|
|-|-|-|
|Greedy|0.75|0.00|
|FJ(10s)|35.66|6.0|
|RS(10s)|49.75|9.08|
|OR-Tools(10s)|52.41|10.25|
|ANYCSP(10s)|0|0|
|ConsFormer(10s)|52.60|11.92|
|Color count = 5|n=50|n=100|
|-|-|-|
|Greedy|32.42|0.00|
|FJ(10s)|82.83|54.5|
|RS(10s)|83.08|56.91|
|OR-Tools(10s)|83.08|57.16|
|ANYCSP(10s)|79.17|34.83|
|ConsFormer(10s)|78.16|42.50|
We observe that while the local-search based heuristics were able to perform well on the smaller instances, their performance significantly worsens on the larger instances with 10 colors.
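For readers who want a concrete picture of the weakest baseline, a first-fit greedy coloring of the kind networkx provides can be reimplemented in a few lines (a plain stdlib sketch for illustration; `greedy_color` and `violations` are our own helper names, not networkx or OR-Tools APIs):

```python
def greedy_color(adj):
    """First-fit greedy coloring in largest-degree-first order.
    adj maps each node to its set of neighbors; returns {node: color}."""
    colors = {}
    for node in sorted(adj, key=lambda v: -len(adj[v])):
        used = {colors[n] for n in adj[node] if n in colors}
        c = 0
        while c in used:
            c += 1
        colors[node] = c
    return colors

def violations(adj, colors):
    """Number of monochromatic edges, i.e. unsatisfied != constraints."""
    return sum(colors[u] == colors[v] for u in adj for v in adj[u] if u < v)

# 5-cycle: chromatic number 3, so the greedy coloring here uses 3 colors.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
colors = greedy_color(adj)
print(max(colors.values()) + 1, violations(adj, colors))  # 3 0
```

Greedy coloring always produces a proper coloring, but it may need far more colors than a fixed budget allows, which explains why it satisfies so few instances in the tables above when the color count is capped.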
> nurse scheduling results
They were reported in-text in Section 4.3. We will revise the paper for better clarity.
> I’m concerned that the paper may lack novelty.
As far as we know, we are the first to successfully apply a Transformer architecture to solve general CSPs in a self-supervised setting. Existing work often requires graph representations of the problems while we treat them as sequences.
> How did you decide the lambda_i? Would an adaptive way to select different lambda_i for different constraints lead to better performance?
This is a great suggestion and an adaptive lambda is indeed an interesting direction that we plan on exploring for future work. For now, each problem has a relatively simple mixture of constraints, so the weights are manually set.
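For concreteness, the fixed-weight loss described here can be sketched as follows (our own minimal illustration; the names are hypothetical, not the paper's code):

```python
def csp_loss(violations, lambdas):
    """Weighted sum of per-constraint continuous violation penalties.

    violations: dict mapping constraint name -> continuous penalty value
    lambdas:    dict mapping constraint name -> manually chosen weight lambda_i
    """
    return sum(lambdas[c] * v for c, v in violations.items())

# Two constraint types with manually set weights.
loss = csp_loss({"alldifferent": 2.0, "cardinality": 0.5},
                {"alldifferent": 1.0, "cardinality": 2.0})
```

An adaptive scheme, as the reviewer suggests, would replace the fixed `lambdas` with learned or scheduled values.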
>Instead of the differentiable loss with GumbelSoftmax, I wonder if the authors have tried reinforcement learning?
We apologize if we have misunderstood, but we believe the reviewer may be referring to the use of the REINFORCE trick (or score-function-based gradient estimators) from RL, which can be used in general to compute gradients through discrete variables. However, we note that the Gumbel-Softmax paper [1] discusses that this gradient estimator has high variance (Section 3.2 of [1]) and performs very poorly in practice (Section 4.1 of [1]); in fact, it is the worst performing of all discrete gradient estimators they compare.
[1] Jang et al. "Categorical Reparameterization with Gumbel-Softmax."
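For readers unfamiliar with the estimator, a minimal NumPy sketch of Gumbel-Softmax sampling (a generic illustration, not the authors' implementation) looks like:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of a categorical sample.

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-scaled
    softmax; as tau -> 0 the output approaches a one-hot sample.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float)
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    z = (logits - np.log(-np.log(u))) / tau  # add Gumbel noise, scale by tau
    z = z - z.max()                          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# A relaxed sample over 3 categories; entries are positive and sum to 1.
sample = gumbel_softmax(np.log([0.7, 0.2, 0.1]), tau=0.5,
                        rng=np.random.default_rng(0))
```

Because the noise enters before the softmax, gradients flow through the relaxed sample, avoiding the high-variance score-function estimator discussed above.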
>instead of feeding the entire solution back for refinement, have the authors consider only improving a subpart of the solution and fixing the rest
We completely agree with the intuition, and in fact, our model effectively incorporates this idea via the Variable Subset Selection step (Section 3.2, line 203). At each iteration, only a subset of variables is selected for update, mimicking local search behavior. We found that this strategy significantly improves generalization compared to updating the full assignment each time, as discussed in Section 4.4.

---

Summary: The paper introduces an iterative, local search approach towards solving constraint satisfaction problems with transformer models and self-supervision.
A new encoding scheme and the self-supervised training scheme are introduced plus a differentiable approximation for the violation of key constraints. The main aspect is the combination of being able to differentiate the approximate constraints and the 1-step prediction approach of the transformer with an overall iterative solving loop.
Experiments are performed and show the effectiveness of the approach.
## update after rebuttal
I thank the authors for their response and their effort during the rebuttal. I maintain my score.
Claims And Evidence: The paper claims to avoid bottlenecks available in other approaches (supervised + RL) and does so through the self-supervision approach.
This is not novel in itself (RL also avoids supervision), but it is more efficient because the loss is differentiable.
It is more limited, though, because it requires every constraint to have a differentiable form. Looking at the approximations proposed in the paper, these differentiable forms are a conceptualization of constraint violations, which are also commonly used in constraint-based local search to judge the quality of an improvement, and should be derivable for every constraint. So, at this stage it is mostly a matter of defining them for all constraints and, perhaps, an even more general form could be derived by looking at the overall CSP. In conclusion, the claim is sufficiently supported.
Methods And Evaluation Criteria: The selected methods make sense. A CSP is inherently either a graph or a sequence and using a transformer model is one way to model it in the ML paradigm (graph-based approaches being the other major one currently studied in the literature).
In terms of datasets, the selection is okay and in line with the literature. In CSP practice there are more complex and larger datasets available, which should be considered for future work and would really put a test on the scalability of the approach -- but not necessarily required here.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experiments make sense. Baselines are okay. There is a lot of work in this area and other baselines might be suitable as well besides ANYCSP, but including OR-Tools is the most relevant one and they are sufficient to serve the point.
Supplementary Material: No
Relation To Broader Scientific Literature: Connections to the relevant areas (CP, ML, ML4CO) are made and supported with relevant references.
Essential References Not Discussed: I'm not aware of any missing literature.
Other Strengths And Weaknesses: This is a really good paper with a strong contribution. The proposed encoding and training scheme is sufficiently simple to be usable and adaptable for other settings and by relying on CP modelling for the CSP problems it can be very powerful.
Other Comments Or Suggestions: - The results for nurse scheduling were a bit lost in the text, maybe you can include them in Table 3
Questions For Authors: - Have you evaluated the performance when decomposing the global constraints? How does the model balance having more variables vs more complex global constraints to learn?
- Can you transfer learn, i.e. pretrain the model for individual constraints first and then merge them towards more complex problems?
Code Of Conduct: Affirmed.
Overall Recommendation: 5

---

Rebuttal 1:
Rebuttal: We sincerely appreciate the encouraging feedback and thoughtful comments.
> self-supervision is more limited because it needs support for every constraint to be differentiable.
You're absolutely right to point out that a key limitation of the self-supervised approach is the need to define differentiable approximations for each constraint. As you noted, our method draws directly from the constraint-based local search literature [1], where constraint violations are quantified using "violation degrees" to guide iterative improvement. Our contribution builds on this idea by adapting it into a differentiable, neural-friendly form that allows gradient-based learning.
[1] Michel, L., Hentenryck, P.V. (2018). Constraint-Based Local Search. In: Martí, R., Pardalos, P., Resende, M. (eds) Handbook of Heuristics.
>In CSP practice there are more complex and larger datasets available, put a test on the scalability of the approach.
This is a valuable point and was also raised by other reviewers. In response, we have added new experiments on the MAXCUT problem (see our response to Reviewer tcHx). These instances include thousands of variables and constraints and serve as a stronger test of scalability. The results indicate that ConsFormer remains competitive, even under larger problem sizes. While our ultimate goal is to apply ConsFormer to a wide range of CSPs, we believe that the experimental results on four benchmark problems with different combinatorial structures provide ample evidence for the promise of our method.
>The results for nurse scheduling were a bit lost in the text
Thank you for raising this, we will revise the paper to highlight the results better.
>Have you evaluated the performance when decomposing the global constraints? How does the model balance having more variables vs more complex global constraints to learn?
This is an excellent question. To investigate, we conducted an additional experiment on Sudoku where the all-different constraints are decomposed into pairwise inequalities (36 inequality constraints per all-different). This can be effectively achieved by defining the continuous penalty as follows:
$$AllDifferent(x_1, \ldots, x_n) \rightarrow \sum_{i< j} \sum_{k} x_i^{(k)}\cdot x_j^{(k)}$$
We present the instance solved percentage below:
|#Iterations|AllDifferent|Decomposed|
|-|-|-|
|1K|59.22|58.47|
|2K|65.88|65.91|
|10K|77.74|77.96|
Interestingly, we observe that in this case, decomposing the constraints does not significantly affect performance which suggests that the model can equally handle a larger number of simpler constraints. However, we note that the resulting number of constraints is still relatively small after decomposing alldifferent for sudoku. Other more complex problems with global constraints may reveal different trade-offs which we leave for future work.
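As an illustration, the decomposed penalty above can be computed on (relaxed) one-hot variable encodings as follows (our own sketch; the names are hypothetical, not the paper's code):

```python
import numpy as np

def alldiff_penalty(X):
    """Pairwise-decomposed AllDifferent violation on one-hot rows.

    X has shape (n, k): row i is the (relaxed) one-hot encoding of x_i.
    The penalty sums x_i . x_j over all pairs i < j, so on hard one-hot
    assignments it counts the number of equal pairs (0 iff all-different).
    """
    X = np.asarray(X, dtype=float)
    G = X @ X.T                        # G[i, j] = sum_k x_i^(k) * x_j^(k)
    iu = np.triu_indices(len(X), k=1)  # index pairs with i < j
    return float(G[iu].sum())

# Rows 0 and 2 take the same value -> exactly one violated pair.
p = alldiff_penalty([[1, 0], [0, 1], [1, 0]])
```

Being a sum of products of relaxed one-hot entries, the penalty stays differentiable with respect to the model outputs.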
>Can you transfer learn, i.e. pretrain the model for individual constraints first and then merge them towards more complex problems?
This is a great insight and is something we are planning as a future direction! One limitation of our current approach (as well as other neural approaches) is the need to train a new model for a new problem domain that consists of a different combination of constraints. With the success of the transformer’s pre-train + fine-tune approach in the natural language setting [2], one could imagine this approach applied to our setting, where models are first pre-trained for all constraints, then combined modularly to be fine-tuned for new problems. Despite the potential promise of this direction, there are many open questions to resolve in future work in order to bridge from the current model to this ideal goal of a highly general and transferable pretrained model. We will add this excellent future work discussion on revision.
[2] Radford, Alec, et al. "Improving language understanding by generative pre-training." (2018).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my comments and for providing additional results!
I remain convinced that this is a great contribution and maintain my score.

---

Summary: The paper proposes training a transformer architecture in a self-supervised manner to solve constraint satisfaction problems. The input to the model is an assignment of values to the variables and the output is a refined assignment. The assignment is encoded as tokens and the constraints of the problem are encoded as relative positional encodings. The loss used to train the model is a continuous proxy for constraint satisfaction. Each type of constraint has its own proxy and then a weighted combination is taken to compute the final loss.
The approach shows competitive results in sudoku solving and CSPs like graph coloring and the nurse scheduling problem.
Claims And Evidence: See my other comments.
Methods And Evaluation Criteria: The evaluation makes sense and the CSP problems that were picked for the evaluation are reasonable.
Theoretical Claims: N/A
Experimental Designs Or Analyses: While the choice of benchmarks in the paper is sensible, I think the overall experimental evaluation and the ablations are lacking for a paper whose content is almost exclusively empirical. Below I provide a list of issues that I have with the evaluation in the paper:
- The paper compares the proposed transformer approach against Anycsp on particular problems like k-coloring but does not provide some of the other comparisons that could be done with Anycsp. For instance, Anycsp has really strong results on max-cut and also performs pretty well on SAT so seeing how this transformer approach stacks up on those benchmarks would be nice. Another candidate benchmark can be found in [1]. These are CNF-sat instances coming from different combinatorial problems.
- Another comment related to the point above has to do with the scalability of the approach. Most of the results provided are evaluations on smaller instances. Coming back to the max-cut example from Anycsp, there are instances there with several thousands of edges (constraints). Can the proposed transformer approach scale up to that? If not, what are the current limitations and are there ways that they could be addressed?
1. Li, Zhaoyu, Jinpei Guo, and Xujie Si. "G4SATBench: Benchmarking and advancing sat solving with graph neural networks." arXiv preprint arXiv:2309.16941 (2023).
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper aims to provide a new self-supervised pipeline for constraint satisfaction problems. As I mention in other places in this rebuttal this has been explored with other architectures before. The loss function design for constraint satisfaction aspect has also been explored in the literature in the self-supervised setting.
The main innovation here is to create a neural solver that relies on a powerful architecture like the transformer. This is obviously worth exploring since transformers have been extremely successful and so far there hasn't been a particularly compelling transformer architecture for these kinds of problems in the literature.
Essential References Not Discussed: Below I provide a non-exhaustive list of references that should be mentioned in the context of constraint satisfaction.
For CSPs and constraint satisfaction:
- (self-supervised architecture for CSPs) Toenshoff, Jan, et al. "Graph neural networks for maximum constraint satisfaction." Frontiers in artificial intelligence 3 (2021): 580607.
- (cardinality constraints) Wang, Runzhong, et al. "LinSATNet: the positive linear satisfiability neural networks." International Conference on Machine Learning. PMLR, 2023.
- (on using gnns for CSPs) Yau, Morris, et al. "Are Graph Neural Networks Optimal Approximation Algorithms?." Advances in Neural Information Processing Systems 37 (2024): 73124-73181.
For SAT:
- (Self-supervised sat solving) Ozolins, Emils, et al. "Goal-aware neural SAT solver." 2022 International joint conference on neural networks (IJCNN). IEEE, 2022.
- (RL for satisfiability without labels)Yolcu, Emre, and Barnabás Póczos. "Learning local search heuristics for boolean satisfiability." Advances in Neural Information Processing Systems 32 (2019).
For loss function design:
- (continuous relaxation/extension design) Karalias, Nikolaos, et al. "Neural set function extensions: Learning with discrete functions in high dimensions." Advances in Neural Information Processing Systems 35 (2022): 15338-15352.
- (for combinatorial constraints in self-supervised learning )Bu, Fanchen, et al. "Tackling prevalent conditions in unsupervised combinatorial optimization: Cardinality, minimum, covering, and more." arXiv preprint arXiv:2405.08424 (2024).
- (loss function design for self-supervised constrained optimization) Karalias, Nikolaos, and Andreas Loukas. "Erdos goes neural: an unsupervised learning framework for combinatorial optimization on graphs." Advances in Neural Information Processing Systems 33 (2020): 6659-6672.
Other Strengths And Weaknesses: - It is unclear to me what the purpose of the Gumbel-Softmax approach is when predicting the value for the variables. Why is the gumbel noise added? Isn't the softmax sufficient? Does the randomness somehow help? If yes, an ablation study is appropriate. The choice is not discussed in the paper so it's hard to figure out what it could achieve.
Overall, I find this direction promising but the experimental evaluation is inadequate so I cannot recommend accepting this. The writing could also be improved by providing a more detailed discussion around the questions I have raised in this review.
Other Comments Or Suggestions: N/A
Questions For Authors: - There has been work on encoding cardinality constraints (and other constraints) into the loss functions of neural networks [1]. The specific choice of functions used as proxies for constraint violation in the loss seems ad hoc. Have the authors compared with other techniques from the literature? For example, there are various published works on loss function design for self-supervised constrained optimization [1,2]. Have the authors compared those techniques to the ones proposed here? Using a more ad-hoc approach is fine if it works, but I think it merits more discussion in the paper, because there are different ways of encoding those constraints and their performance may vary.
1. Bu, Fanchen, et al. "Tackling prevalent conditions in unsupervised combinatorial optimization: Cardinality, minimum, covering, and more." arXiv preprint arXiv:2405.08424 (2024).
2. Karalias, Nikolaos, and Andreas Loukas. "Erdos goes neural: an unsupervised learning framework for combinatorial optimization on graphs." Advances in Neural Information Processing Systems 33 (2020): 6659-6672.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: We greatly appreciate your thorough feedback and have conducted new experiments which we believe have substantially improved the paper.
> Additional benchmark.
We have adapted ConsFormer for MAXCUT based on your suggestion.
- MAXCUT is the problem of partitioning nodes of a graph into two sets in a way that maximizes the size of the cut.
- Following ANYCSP, we train on graphs with 100 vertices and test on GSET instances with 800 to up to 10000 vertices.
We report the absolute and relative gap (in percentage) to best known values:
|Method|V=800|V=1K|V=2K|V≥3K|
|-|-|-|-|-|
|Greedy|411.44(5.26)|359.11(6.64)|737.00(6.81)|774.25(6.30)|
|SDP|245.44(3.14)|229.22(4.24)|-|-|
|RUNCSP|185.89(2.38)|156.56(2.90)|357.33(3.30)|401.00(3.26)|
|ECO-DQN|65.11(0.83)|54.67(1.01)|157.00(1.45)|428.25(3.49)|
|ECORD|8.67(0.11)|8.78(0.16)|39.22(0.36)|187.75(1.53)|
|ANYCSP|1.22(0.02)|2.44(0.05)|13.11(0.12)|51.63(0.42)|
|ConsFormer|24.44(0.31)|18.22(0.34)|47.00(0.43)|155.88(1.27)|
|OR-Tools|143.89(1.84)|112.78(2.09)|365.89(3.38)|378.62(3.08)|
We observe that while ANYCSP remains the best performing approach on GSET, ConsFormer achieves a relative gap of 0.31% to 1.27% on average without extensive model tuning, showcasing its ability to scale up to larger problems with thousands of constraints.
> ablation study on the Gumbel-Softmax.
We chose Gumbel-Softmax for its differentiable sampling of discrete variables, aligning with the discrete nature of CSPs. Experiments during development showed it outperformed standard softmax, which we confirmed through a more systematic ablation below.
However, your question prompted us to further investigate why Gumbel-Softmax helps: is it the stochasticity introduced by the Gumbel noise, or simply the sharper outputs it produces? To investigate this, we ran additional experiments using softmax with added temperature control, $Softmax_\tau(z_i) = \frac{\exp\left(z_i / \tau\right)}{\sum_{j} \exp\left(z_j / \tau\right)}$, a variant we had not systematically explored in earlier versions. We present the results below (for Sudoku/Graph Coloring, we show the % of instances solved; MAXCUT reports the absolute gap to the best known values):
||Gumbel-Softmax|Softmax|Softmax w/T|
|-|-|-|-|
|Sudoku|100|100|100|
|Sudoku-hard|77.74|73.72|**85.67**|
|Graph-Coloring-5 V=50|**78.16**|74.91|77.33|
|Graph-Coloring-5 V=100|42.50|35.33|**42.66**|
|Graph-Coloring-10 V=100|52.60|53.00|**53.66**|
|Graph-Coloring-10 V=200|11.92|12.75|**12.92**|
|MAXCUT V=800|**24.44**|126.56|123.11|
|MAXCUT V=1K|**18.22**|56.89|58.33|
|MAXCUT V=2K|**47.00**|135.11|123.67|
|MAXCUT V≥3K|**155.88**|305.25|287.38|
Interestingly, softmax with temperature yielded similar or better results than Gumbel-Softmax on problems with smaller sizes, while performing worse on larger problems from MAXCUT. This suggests that while the sharper distributions indeed boost model performance, the stochasticity allows the model to generalize better across larger problem instances. The randomness can promote diversity in intermediate solutions which may help the model escape local optima over multiple inference steps. We will include this ablation in the updated paper.
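The temperature-controlled softmax variant used in this ablation can be sketched as follows (a generic illustration):

```python
import numpy as np

def softmax_t(z, tau=1.0):
    """Temperature-controlled softmax: lower tau gives a sharper distribution."""
    z = np.asarray(z, dtype=float) / tau
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

p_soft = softmax_t([2.0, 1.0, 0.1], tau=1.0)
p_sharp = softmax_t([2.0, 1.0, 0.1], tau=0.1)
```

Unlike Gumbel-Softmax, this variant is deterministic: it sharpens the output without injecting noise, which is exactly the distinction the ablation probes.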
> additional references
We appreciate the extensive list of references and will include them accordingly. We note that this list of references echoes our own finding which is that most existing approaches focus on graph-based representations or SAT formulations. To the best of our knowledge, Transformers have not been effectively implemented for general form CSPs.
> constraint violation functions seems ad-hoc and merits more discussion in the paper.
Our approach is grounded in principles from the constraint-based local search literature in CP [1]. Here, constraints are associated with “violation degrees” where violation degree = 0 ⇔ satisfied. Specific functions to evaluate violation degrees are designed for different global constraints.
Selecting the design for the continuous penalty is indeed important. During development we experimented with various continuous approximations including some inspired by recent work using T-norm [2] and Entropy [3] with varying effectiveness.
However, our goal is to showcase the effectiveness of the self-supervised approach combined with the single-step transformer. We view the choice of continuous penalty functions as a flexible and modular component of our framework, one that can benefit from further improvements, but is not the central focus of this work. We will include more discussion on this in the updated paper.
[1] Michel, L., Hentenryck, P.V. (2018). Constraint-Based Local Search. In: Martí, R., Pardalos, P., Resende, M. (eds) Handbook of Heuristics.
[2] Giannini, Francesco, et al. “T-norms driven loss functions for machine learning.”
[3] Chen, Di, et al. “Deep reasoning networks: Thinking fast and slow.”
---
Rebuttal Comment 1.1:
Comment: OK, thank you for the update. I will raise my score but I am still not completely satisfied with the experiments here. The GSET results look promising. I would like to see how long it takes to train/test for those. You mention that it takes 6-10 hours to train for your original experiments. From your response to the other reviewer, it sounds like it's a similar amount of time. What about memory? I think showing more results for the CNF instances I brought up and doing a study on size generalization would make the case for this paper stronger.
By size generalization, I mean: suppose you pick hard random 3-CNF SAT instances around the critical threshold (~4.25). How large are the instances that you can solve? This includes discussing the memory cost/scalability of the approach, and the scale X of the training instances (as measured in the number of clauses or variables), as well as how many are needed, to do well at a scale of Y.
I will bump my score up because the paper shows the potential of training a transformer architecture in self-supervised style for combinatorial problems, but I think those kinds of experiments are essential for papers with an empirical focus. I am willing to propose acceptance for this as long as the authors commit to providing more of the results I suggested (say in the final version of the paper if not possible soon).
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued engagement and suggestions.
> I would like to see how long it takes to train/test for those.
Due to time constraints of the rebuttal, we ran limited hyper-parameter search and used relatively smaller models. The reported results are based on a model trained for under three hours (wall clock) with 3 attention heads and 4 transformer layers. The testing was conducted following the same procedure and time limits as ANYCSP for GSET.
> What about memory?
The models were trained on the same type of single-GPU nodes, each with 32 GB of CPU memory and between 12 GB and 32 GB of GPU memory, depending on the allocated GPU.
> I think showing more results for the CNF instances I brought up and doing a study on size generalization would make the case for this paper stronger.
We agree that a study on size generalization would be insightful and we will conduct the suggested experiments for the final version. Specifically, we plan to do the following:
- Adapt our framework to 3-SAT using one-hot encoded binary variables
- Explore differentiable loss functions for SAT such as the log loss used by QuerySAT[1] and various T-Norm driven loss functions (Gödel, Łukasiewicz, Product, etc)[2].
- Report the 3-SAT performance in comparison to ANYCSP.
- Conduct size generalization study as suggested by the reviewer.
[1] Ozolins, Emils, et al. "Goal-aware neural SAT solver." 2022 International joint conference on neural networks (IJCNN). IEEE, 2022.
[2] Giannini, Francesco, et al. "T-norms driven loss functions for machine learning." Applied Intelligence 53.15 (2023): 18775-18789. | null | null | null | null | null | null |
---

FlipAttack: Jailbreak LLMs via Flipping
Decision: Accept (poster)

Summary: The authors study a simple yet effective jailbreak method that attacks recent state-of-the-art LLMs in one query. They exploit the autoregressive nature of LLMs and introduce left-side perturbations to the text. Four flipping modes are proposed to disguise the harmful content and fool the LLMs. The proposed method is universal, stealthy, and simple. Experiments demonstrate that FlipAttack achieves promising performance.
## update after rebuttal
My concerns are addressed, and I maintain my positive score.
Claims And Evidence: Yes, the claims are supported by experiments and analyses.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are reasonable.
Theoretical Claims: They don’t provide the theoretical claims.
Experimental Designs Or Analyses: Yes. They design experiments to demonstrate the attack performance, the efficiency of FlipAttack. They analyze the effectiveness of the flipping modes, different parts in FlipAttack. They analyze the deep-in reasons for success of FlipAttack. These designs and analyses are valid.
Supplementary Material: Yes. The cases studies and additional experiments in Appendix.
Relation To Broader Scientific Literature: They propose a simple yet effective method to jailbreak LLMs via 1 query. It is universal, stealthy and simple. They analyze the reasons for the success and provide the insights for defense.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths**
- The motivation (attacking LLMs via 1 query) is clear, and the methodological designs (starting from the property of autoregression) are reasonable.
- The core idea of adding left-side perturbation is novel. Although the method is straightforward, the analyses of designs are sufficient, and the performance is promising.
- The experiments are very comprehensive, demonstrating the performance, efficiency of the proposed method. The reason analyses in section 4.3 and the case studies in the appendix are interesting.
- The paper survey is comprehensive in the related work part.
**Weaknesses**
- Analyzing Figure 4 reveals inconsistent conclusions. For example, CoT helps to improve the attack success rate in Figures (a) and (f), but in Figure (e), CoT leads to a significant drop in performance.
- The authors merely conduct experiments on API-based LLMs but overlook the open-sourced models.
- The proposed defense strategies are straightforward but seem not very effective, as shown in Table 21. How to defend the proposed method effectively?
- Is the proposed method a cipher-based jailbreak method? If not, what’s the main difference between the cipher-based jailbreak methods.
Other Comments Or Suggestions: 1. In Line 16, "that they struggle to comprehend the text when the perturbation is added to the left side" -> "that they struggle to comprehend text when perturbations are added to the left side"
2. In Line 78, "the success rate of 94.04% on GPT-4 Turbo and 86.73% on GPT-4" should clarify that "the success rates are 94.04% for GPT-4 Turbo and 86.73% for GPT-4."
Questions For Authors: 1. What’s the strength of the proposed method compared to the cipher-based jailbreak attacks? I think the text flipping may be one kind of ciphers.
2. How to defend the proposed attack effectively except the two naive defend methods mentioned in the original paper?
Ethics Expertise Needed: ['Other expertise']
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: Thanks for your insightful and constructive review. We response to each question as follows. Following your suggestion, **all modifications will be added to the final paper.**
**Inconsistent Conclusions**
- For different LLMs, **their abilities are different**, which may lead to different conclusions.
- For example, like GPT 3.5 Turbo, CoT **helps it to better understand** the instruction and recover the flipped text. Meanwhile, CoT does not make it recognize the harmful intent in the prompt.
- Besides, for GPT-4o mini, CoT may help it to recognize the harmful intent in the prompt, thus decreasing the attack success rate.
**Open-sourced Models**
- We conducted experiments on open-sourced models like **LLaMA 3.1 405B** and **Mixtral 8x22B** in the original paper.
**How to Defend**
- We can utilize a **reasoning-based guard model** to defend against such flipping-based attacks.
- We can further enhance the **text understanding ability** of the LLM itself and help it to **recognize the harmful intent** in the attack.
- We can use some **non-autoregressive methods** (e.g., diffusion-based LLMs) to alleviate the autoregressive nature.
**Cipher-based Method**
- Our proposed method is not based on the cipher. Our method is based on the analyses of the **autoregressive nature** of the LLMs.
- The existing cipher-based methods are **typically complex, which limits their efficiency and effectiveness**. In contrast, our method is simple yet effective.
- Unlike ciphers, which focus on **hiding content via transformations**, text flipping in the proposed method likely affects the semantic and structural aspects of the input rather than just encoding.
- Cipher attacks rely on encoding that can be **decoded back into a harmful prompt**. The flipping strategy may instead exploit how LLMs interpret and prioritize different parts of the input.
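To make the contrast with ciphers concrete, here is a generic sketch of two left-side flipping transformations (our own illustration, not necessarily the paper's exact modes):

```python
def flip_chars(text: str) -> str:
    """Reverse every character of the sentence."""
    return text[::-1]

def flip_words(text: str) -> str:
    """Reverse the order of the words, keeping each word intact."""
    return " ".join(reversed(text.split()))

# The request is disguised by flipping; the model is then asked to
# recover and follow the original text.
a = flip_chars("how to do X")   # "X od ot woh"
b = flip_words("how to do X")   # "X do to how"
```

Both transformations move the informative tokens away from their expected left-to-right positions, which is what makes them hard for an autoregressive model to screen.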
**Minors**
- We will **fix these typos** in the final version.
---
Rebuttal Comment 1.1:
Comment: I have read the authors’ rebuttal, and my problem has been solved. I decide to keep my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your support. We will further improve the quality of our paper according to your suggestions in the final version.

---

Summary: The paper introduces FlipAttack, a novel jailbreak attack method designed for black-box large language models (LLMs). The authors analyze the autoregressive nature of LLMs, revealing that they struggle to comprehend text when perturbations are placed on the left side. Based on this insight, they propose a method to disguise harmful prompts by applying left-side perturbations, leading to four distinct flipping modes. The effectiveness of FlipAttack is demonstrated through extensive experiments on eight different LLMs. The paper claims that FlipAttack is universal, stealthy, and can jailbreak LLMs with a single query.
Claims And Evidence: Yes, they provide clear and convincing evidence for the claims.
Methods And Evaluation Criteria: Yes, they make sense.
Theoretical Claims: This paper has not theoretical claims.
Experimental Designs Or Analyses: Yes. The designed experiments are valid. Table 1, 2 demonstrates the superiority of the proposed method. Figure 3 shows efficiency. Figure 4 shows the effectiveness of different modules. Table 3,4,5 analyze the underlying mechanisms of FlipAttack.
Supplementary Material: Yes, an appendix is provided. However, the anonymous code link points to a repository that cannot be found.
Relation To Broader Scientific Literature: Safety of large language models. This paper provides an effective and efficient method to jailbreak LLMs and an underlying understanding of the attack.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Pros:
1. The proposed method is simple yet effective, requiring merely one query for jailbreaking LLMs, while most existing methods require multiple queries or an ensemble.
2. The concept of using left-side perturbations to exploit the autoregressive nature of LLMs is quite interesting, which may make this attack hard to defend.
3. The experiments are solid, with significant performance improvement and detailed analysis. The explainable reasons for the successful attack improve the reliability of attacks and provide potential ideas for defending such attacks.
4. The supplementary material is sufficient, and the codes are open-sourced.
Cons:
1. The appendix is too long and contains many case studies, which are not very necessary and limit the readability. Besides, the cases have already been provided in the given code repository.
2. Missing theoretical analyses of the proposed method. Although for such a practical jailbreak, theoretical analysis may be difficult to propose, coming up with theoretical principles for attacks will significantly improve the quality of the paper.
3. On LLaMA 3.1 405B, the attack success rate is limited. While other baselines are not effective, authors should conduct more analyses and provide explanation.
4. Missing citation of a recent notable defense method against jailbreaks [1]. I'm curious whether the proposed method can break this interesting constitutional classifier.
[1] Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming
Other Comments Or Suggestions: 1. Keep the overall pages of the whole paper and improve the readability.
2. In Section 4.3, the random strings like ``Q@+?2gn'' limit readability and reduce the quality of the paper. Maybe you should move these detailed cases to the appendix.
3. Figure 1 is too small.
Questions For Authors: 1. I'm curious how to successfully defend against such a flipping attack. Since it exploits the autoregressive nature of LLMs, and all current SOTA LLMs are autoregressive, how can it be defended at the root?
2. Can the proposed method attack recent diffusion-based LLMs like LLaDA [1]?
[1] Large Language Diffusion Models
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your insightful and constructive review. We respond to each question as follows. Following your suggestion, **all modifications will be added to the final paper.**
**Theoretical Analyses**
- Jailbreak attack is a **practical direction**.
- We provide **empirical analyses** to demonstrate the **universality** and **stealthiness** of the proposed method.
- We will conduct **theoretical analyses on the autoregressive nature** of the method in the future.
**LLaMA 3.1 405B**
- Its **instruction-following ability** is limited, and it cannot recover the flipped sentence and execute the harmful intent.
- The **safety alignment** of this model is better than that of closed-source models.
- Other baselines also achieve **unpromising performance** on this model.
**Constitutional Classifiers**
- Please note that this paper went online on **31 Jan 2025**, while the ICML 2025 deadline was **09 Jan 2025**. We regard it as **a concurrent work**.
- We will **discuss constitutional classifiers** in detail in the final version.
- We will test our method on it **once constitutional classifiers are open-sourced or the API is available**.
**How to Defend**
- We can utilize a **reasoning-based guard model** to defend against such flipping-based attacks.
- We can further enhance the **text understanding ability** of the LLM itself and help it **recognize the harmful intent** in the attack.
- We can use **non-autoregressive methods** like the LLaDA model you mentioned to alleviate the autoregressive nature.
**LLaDA**
- Please note that this paper went online on **14 Feb 2025**, while the ICML 2025 deadline was **09 Jan 2025**. We regard it as **a concurrent work**.
- We will **discuss these kinds of non-autoregressive models** in detail in the final version.
**Minors**
- We will **move the case studies into the code** in the final version.
- We will **make Figure 1 larger** in the final version.
- The random strings were generated randomly in our experiments. We will **make them more readable** in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks. I notice the constitutional classifier and LLaDA are indeed concurrent work. Thus, there is no need to compare them in performance. The provided defense strategies are interesting.
As the authors addressed my concerns, I would consider raising my score based on the following reasons. 1) The method is stealthy and universal, achieving a promising attack success rate within 1 query. 2) The empirical analyses of left-side perturbations help authors better understand the mechanism of jailbreaking. 3) The experiments are solid and the codes are available.
---
Reply to Comment 1.1.1:
Comment: Thanks for your support. We will further improve the quality of our paper according to your suggestions in the final version. | Summary: The authors propose FlipAttack, which encodes malicious prompts by reordering words or characters and relies on the reasoning capabilities of the LLM s.t. it can decipher the prompt. The authors demonstrate empirically that this procedure often bypasses the guardrails, is very efficient, and is difficult to detect.
Claims And Evidence: The left-side perturbation claim is somewhat unclear. First, the perplexity experiment (Table 5) is at most suggesting something along these lines. Second, the perturbations are not really happening on the left side of the prompt; rather, the malicious request is reordered in its entirety. Solely due to the "flipping guidance" one might argue that the attack happens early in the prompt.
Other than that, the attack achieves convincing attack success rates, demonstrating that the chosen perturbations and prompt templates are reasonable.
However, some claims like "FlipAttack jailbreaks LLMs with merely 1 query" are unclear since the details of FlipAttack remain vague.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: It is unclear how the FlipAttack works specifically. The main details of the experimental setup should be in the main body of the work. The authors defer all details to the appendix.
Supplementary Material: Skimmed appendix.
Relation To Broader Scientific Literature: The authors propose a simple and efficient encoding of malicious requests via reordering words and characters. Yet, FlipAttack's perturbations are very effective.
Essential References Not Discussed: E.g., "left side attacks" have also been studied by COLD [A] and PGD [B]. [A] also studies more flexible rewrites.
[A] Guo et al. "COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability" 2024
[B] Geisler et al. "Attacking Large Language Models with Projected Gradient Descent" 2024
Other Strengths And Weaknesses: Further weaknesses:
1. The paper remains vague on the details of FlipAttack. The authors propose four rewrites ("attack disguise") and four different templates ("flipping guidance"). In the actual attacks, do the authors try all 16 combinations? How do the authors come up with claims like "FlipAttack jailbreaks LLMs with merely 1 query"? It would be good to provide, e.g., pseudo-code.
1. Essential setup details are missing in the main body
1. It is unclear how inserting gibberish early in a prompt vs. late in the prompt relates to a conclusion w.r.t. left to right understanding.
1. The use of "stealthy" is quite distracting and not really explained until the experimental section.
1. The use of "universality" is unclear.
Other Comments Or Suggestions: 1. The set notation for harmful request $\mathcal{X} = \{x_1,x_2,\dots\}$ is odd as it implies that tokens would be unordered.
1. Table 1: "white-box attack" is misleading as these are transfer attacks from a different victim model
1. Line 184. It should probably be "keeping **it** universal"
1. The header contains "Submission and Formatting Instructions for ICML 2025" instead of the paper title
1. Figure 1: text too small
1. Abstract: terms like "flipping modes" are unclear
Questions For Authors: See above as well.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your insightful and constructive review. We respond to each question as follows. Following your suggestion, **all modifications will be added to the final paper.**
**Left-side Perturbation**
- Left-side perturbation is the **principal idea** of our proposed FlipAttack. It helps readers **better understand how FlipAttack works** and motivates researchers to defend against such an attack.
- Concretely, our flipping modes can **be regarded as iteratively adding left-side perturbation** to the original prompt, disguising the harmful intent.
- To help you understand this process, we created a GIF that **illustrates the workflow** of our method at https://anonymous.4open.science/r/ICML25-7156-FlipAttack-78D5/flipattack_overview.gif.
- For the experiments regarding perturbations that really happen on the left side of the prompt, please see **Figure 7**.
- **'Flipping guidance' is used to help the LLM flip back the flipped prompt and execute the harmful behaviors**. We will revise to **clarify the role of 'flipping guidance'** and alleviate misguiding readers.
**Jailbreak LLM with merely 1 query**
- In Line 106 of the original paper, we mentioned that `They utilize iterative refinement and cost a large number of queries.`
- This means the promising performance of the **existing jailbreak attacks relies on multiple queries** to victim LLMs and iterative refinement, e.g., RENELLM, GPTFUZZER, and TAP, leading to **high costs**. Also, it **limits their applicability** since the multiple abnormal queries will decrease the stealthiness of the attack.
- Differently, our method **can jailbreak LLMs** merely **using 1 query to the victim LLM**, improving the **efficiency and applicability**.
**Details of FlipAttack**
- For the **mode selection**, we have already listed them in **Lines 743-748 of the original paper**.
- Besides, we have already conducted **ablation studies** on them. See **Figure 4 and Figure 5 of the original paper**.
- In the actual attack, for the tested LLMs in this paper, the attackers can just **adopt our combination settings**. For a new LLM out of this paper, the attackers can conduct ablation studies on the combinations or just use a default combination to attack the LLM.
- We will **move more details into the main body** in the final version.
- We already provide the **executable code** at https://anonymous.4open.science/r/ICML25-7156-FlipAttack-78D5/README.md
**Related Work**
- For COLD-Attack, we have already **discussed and compared it in the original paper**. Please check Lines 79, 283, and 420. Besides, **COLD-Attack is a white-box attack** method while our proposed method is a black-box attack method.
- And **[B] is also a white-box attack**. As they claimed in their paper: `Additionally, we did not conduct experiments against AI assistants deployed for public use, like ChatGPT, Claude, or Gemini. Nor is our attack directly applicable to such models due to the white-box assumption`. We will add a discussion of this paper in the white-box attack part.
**Relationship Between Left-side Perturbation and Left to Right Understanding**
- First, we find that LLMs tend to understand text from left to right.
- Then, based on this finding, we aim to disrupt the LLMs' ability to understand the harmful prompts.
- One effective way is to add noisy text on the left side of the harmful prompt; adding noise on the right side has less influence.
- We use the harmful prompt itself to construct this noise via flipping.
- The whole workflow can be found in **Left-side Perturbation**.
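As a rough, hypothetical illustration of the flipping idea sketched above (our own toy sketch, not the authors' implementation), reversing characters or word order turns the request itself into the left-side "noise":

```python
def flip_chars(prompt: str) -> str:
    """Character-level flip: the whole request becomes its own left-side noise."""
    return prompt[::-1]

def flip_words(prompt: str) -> str:
    """Word-order flip: coarser reordering that still obscures the left side."""
    return " ".join(reversed(prompt.split()))

# A benign stand-in request for demonstration.
request = "how to bake a cake"
print(flip_chars(request))  # "ekac a ekab ot woh"
print(flip_words(request))  # "cake a bake to how"
```

Either transform is trivially invertible by the victim model (flip again), which is what the "flipping guidance" asks it to do.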
**Stealthy**
- It means the **attack is hard to detect** by the victim LLM itself or the guard model.
- It is a **common term for jailbreak attacks** [1,2].
- We will **add more explanation of stealthy in the early part** of the final paper.
[1] Stealthy Jailbreak Attacks on Large Language Models via Benign Data Mirroring
[2] Gpt-4 is too smart to be safe: Stealthy chat with llms via cipher
**Universality**
- As we mentioned in Line 47 of the original paper, `First, to make our proposed method universally applicable to state-of-the-art LLMs, we study their common nature, i.e., autoregressive, `.
- Universality means the proposed **attack method can be effective for all LLMs**.
- Since **our design is from the autoregressive nature of the LLMs**, they **are all vulnerable to our FlipAttack**.
- We will **add more explanation of universality in the early part** of the final paper.
**Minors**
- We will modify the notation to `X = (x₁, x₂, …, xₙ)`, treating a harmful request as an ordered sequence of n tokens.
- In Table 1, we will explain `white-box attack` as transfer attacks in the title.
- We will fix Line 184 and modify it as `keeping it universal`.
- We will remove Submission and Formatting Instructions for ICML 2025 and use our title.
- We will make the text in Figure larger.
- We will give a detailed definition of flipping modes in the abstract. | null | null | null | null | null | null | null | null |
Foundation Molecular Grammar: Multi-Modal Foundation Models Induce Interpretable Molecular Graph Languages | Accept (poster) | Summary: This paper introduces Foundation Molecular Grammar (FMG). This approach uses a multimodal large language model to identify meaningful substructures in molecular graphs, generating an interpretable grammar for molecule generation. By rendering molecules as images and prompting the model with specialized prompts, FMG enforces chemically valid decomposition steps and captures motifs without heavy expert annotation.
Claims And Evidence: While the paper repeatedly highlights the novel idea of using a multi-modal foundation model (MMFM) to induce a Foundation Molecular Grammar(FMG), it is still not evident how this method solves a pressing issue in molecular discovery workflows.
Methods And Evaluation Criteria: The paper evaluates the proposed method on small, specialized monomer datasets plus two real-world datasets (HOPV, PTC). However, the field of molecular generation typically uses larger, more standard benchmarks (e.g., ZINC250k, MOSES) to compare performance comprehensively. Without these broader benchmarks or additional state-of-the-art methods, the generality and competitiveness of the proposed approach remain unclear.
Theoretical Claims: There are no theoretical proofs in the paper.
Experimental Designs Or Analyses: The authors emphasize interpretability as a major advantage of using FMG. However, the paper only provides limited examples in Table 3 of the interpretability in action—mostly brief snippets. A deeper discussion or demonstration (e.g., a domain-expert walkthrough showing how the FMG clarifies or improves the design process) would reinforce the claim that this approach is uniquely transparent or insightful.
Supplementary Material: Yes, I did review the appendix. They are mainly case studies and examples.
Relation To Broader Scientific Literature: Using MMFM in graph generative tasks is novel and sounds promising.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA.
Other Comments Or Suggestions: The notation in tables needs to be well-defined. For example, what does each column mean in the table?
Questions For Authors: Please refer to previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: *The authors emphasize interpretability as a major advantage of using FMG. However, the paper only provides limited examples.... A deeper discussion or demonstration (e.g., a domain-expert walkthrough...) would reinforce the claim that this approach is uniquely transparent or insightful.*
* The examples in Table 3 are actually drawn from one of the five full example domain-expert walkthroughs in Appendix D, where the expert critiques FMG’s reasoning step-by-step. Additionally, in Appendix D.6, we include concluding thoughts delivered by the expert, highlighting LLM agents’ strong performance in substructure extraction and limits in harder tasks needing expert intuition. The modular nature of the decomposing process is key to FMG’s interpretability, and the case studies highlight how FMG makes this clearer.
**Additional Case Study:** As both you and V4pu suggested, we ran another case study where an expert contrasted discrepant decompositions by their rationale. Due to the short rebuttal window, we only finished this for 11 chain extenders but are actively working on other datasets. The setup is: the expert chooses a (good, bad) pair of decompositions for each molecule and summarizes the rationale. We then ask our LLM-as-a-judge to summarize the key points and decide solely based on the explanations, which decomp. is better. In most cases, the LLM judge identifies the good decomposition, showing how LLM explanations are useful in closing its own design loop.
For example, the expert wrote:
“Analysis A demonstrates a solid understanding of motifs like amide, urea, imidazole, and carbonyl groups—key to chain extenders. It justifies motif 1 as the root based on its role in peptide bond formation. Analysis B neglects important groups and lacks a strong rationale for selecting motif 0 as root.”
GPT similarly judged:
**Defining Motifs:**
Analysis A better identifies functional groups like amides/carbonyls that define chain extender properties. It correctly selects motif 1 for its role in polymer backbones.
**Functional Group Explanation:**
Analysis A explains the contributions of groups to mechanical and processing properties. Analysis B simplifies the motifs and lacks depth in chemical reasoning.
**Decision:**
Analysis A is favored for better understanding and detailed explanation of groups defining chain extenders.
We repeated the 11-molecule test 5 times and found LLM agreement with the expert on (10, 7, 8, 8, 9) runs.
This case study mirrors expert round-table discussions, where competing rationales must be contrasted to make progress. In App. C, we show selecting higher-ranked decompositions judged by the LLM improves class membership; other design goals could benefit similarly.
We hope this presents further quantitative evidence for the usefulness of the explanations.
*While the paper repeatedly highlights the novel idea of using a multi-modal foundation model (MMFM) to induce a Foundation Molecular Grammar (FMG), it is still not evident how this method solves a pressing issue in molecular discovery workflows.... the field of molecular generation typically uses larger, more standard benchmarks to compare performance comprehensively. Without these broader benchmarks or additional state-of-the-art methods, the generality and competitiveness of the proposed approach remain unclear.*
* Thanks for these two comments, which go hand-in-hand. We believe the real issue in molecular discovery is the lack of small, domain-specific benchmarks supporting interpretability and expert input. Standard benchmarks focus on large-scale representation learning, and less so on automating interpretable, expert-knowledge-guided design. On the other hand, domain-specific designs in our experience often come with only a handful of training examples, both because the domain is narrow and because the property values of interest are costly to obtain.
* To evaluate FMG’s generality and competitiveness, we do compare with pretrained and transfer learning models in Tables 1 and 2. We see their performance drops significantly because they fail constraints like synthesizability and class membership while retaining broad coverage. When tackling synthetically accessible chain extenders with amine groups, we believe expert knowledge plays a critical role—but integrating it via annotations traditionally required costly manual labor. FMG seeks to automate this labor, doing the hierarchical decompositions while respecting explicit/implicit constraints, while staying interpretable for expert validation.
*The notation in tables needs to be well-defined. For example, what does each column mean in the table?*
* They are standard metrics used by prior work (e.g. [1,2]). We’ll clarify these notations in the revision.
[1] Data-efficient graph grammar learning for molecular generation. ICLR 2022.
[2] Representing Molecules as Random Walks Over Interpretable Grammars. ICML 2024.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ efforts in addressing my concerns; however, they remain only partly resolved. Specifically:
1. Interpretability: Demonstrating interpretability solely through a handful of case studies is insufficient. More extensive experiments are needed to establish the method’s interpretability convincingly.
2. Dataset and Metrics: I continue to believe the study should include larger, commonly used datasets like ZINC250k and incorporate additional evaluation metrics (e.g., FCD, Scaf, NSPDK). These would more thoroughly validate the method and situate its performance relative to established benchmarks.
---
Reply to Comment 1.1.1:
Comment: **More Benchmarking on Interpretability.**
We appreciate your continued engagement and agree evaluating interpretability rigorously is essential. As it is inherently qualitative, we believe expert-guided assessments offer the most meaningful validation. Our goal is to assist domain experts in decomposing and designing molecules more effectively—making their judgment the ground truth for FMG.
To go beyond qualitative discussion, we’ve now completed our quantitative benchmark on the 4 remaining datasets. For each, we randomly selected 15 molecules, generated two candidate decompositions, and asked an expert to select the one with the more chemically sound reasoning. We then prompted our MMFM to make the same choice. Each pair was evaluated twice with flipped order to eliminate bias.
||Isocy.||Acry.||HOPV||PTC||Total|
|-|-|-|-|-|-|-|-|-|-|
|Gold|A|B|A|B|A|B|A|B||
|Score|$6/12$|$9/12$|$12/15$|$13/15$|$9/15$|$13/15$|$6/12$|$9/12$|$77/108 \approx 71\%$|
In 6 additional cases, the expert found both designs equally reasonable, which we excluded. Against a random baseline, the MMFM’s agreement rate yields $p=1.1e-5$, indicating statistically significant alignment with expert judgment.
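As a sanity check of our own (not part of the rebuttal's stated methodology), the reported significance is consistent with a one-sided exact binomial test against a 50% random-agreement baseline, computable with only the standard library:

```python
from math import comb

def binom_tail(successes: int, trials: int) -> float:
    """P(X >= successes) for X ~ Binomial(trials, 0.5): the chance of agreeing
    with the expert at least this often by random guessing."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials

p = binom_tail(77, 108)  # 77/108 expert-MMFM agreements
print(f"one-sided p = {p:.2e}")  # on the order of 1e-5
```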
These results reinforce our hypothesis that FMG’s hierarchical, interpretable structure can support downstream decision-making and design critique—even enabling the LLM to act as a self-checking agent in an expert-in-the-loop pipeline.
**Additional Evaluation Metrics.**
In response to your suggestion, we added FCD, Scaf, and NSPDK metrics for all major baselines (ICL, VAE, CLMs), shown below.
|Method|Valid (Avg.)|FCD(↓)|||Scaf(↑)|||NSPDK(↓)|||
|-|-|-|-|-|-|-|-|-|-|-|
|MoLeR (I)|100%|35.63|17.34|26.32|0.0|0.0|0.05|0.19|0.12|0.12|
|GPT4 (ICL)|91%|18.23|7.87|19.33|0.05|0.58|0.55|0.10|0.03|0.08|
|MolT5 (I)|76%|19.88|16.28|40.91|0.0|0.0|0.0|0.51|0.71|0.31|
|Text+ChemT5 (I)|42%|13.78|14.70|24.65|0.08|0.64|0.95|0.27|0.11|0.13|
|FMG|100%|17.67|9.63|22.46|0.0|0.15|0.0|0.13|0.08|0.11|
FMG consistently outperforms MolT5 (I) and MoLeR (I). While ICL and ChemT5 show advantages on certain metrics, their low uniqueness (ICL) & validity (ChemT5) raise questions about their practical reliability—echoing our point to V4pu on holistic usefulness.
**Performance on larger benchmarks (ZINC, MOSES).**
While FMG is tailored for expert-in-the-loop, domain-specific design, we agree it’s valuable to evaluate its behavior on broader benchmarks.
Given limited time, we trained FMG on a 1k subset (0.05%) of ZINC250k and evaluated using the MOSES benchmark. We generated 30k samples and computed standard -TestSF metrics. (Details: no data splitting for grammar induction; held-out test set for evaluation; IntDiv2 omitted due to redundancy with IntDiv.)
Results (numbers from MOSES leaderboard):
|Model|Valid(↑)|Unique@1k(↑)|Unique@10k(↑)|FCD-TestSF(↓)|SNN-TestSF(↑)|Scaf-TestSF(↑)|IntDiv(↑)|Novelty(↑)|
|--------------|--------------|--------------|----------------|----------------|----------------|----------------|--------------|----------------|
|Train|1.00|1.00|1.00|0.48|0.59|0.00|0.86|1.00|
|HMM|0.08±0.03|0.62±0.12|0.57±0.14|25.43±2.56|0.38±0.01|0.05±0.02|0.85±0.04|1.00±0.00|
...3 rows omitted...
|CharRNN|0.97±0.03|1.00±0.00|1.00±0.00|**0.52±0.04**|0.56±0.01|0.11±0.01|0.86±0.00|0.84±0.05|
|VAE|0.98±0.00|1.00±0.00|1.00±0.00|0.57±0.03|**0.58±0.00**|0.06±0.01|0.86±0.00|0.69±0.01|
|JTN-VAE|1.00±0.00|1.00±0.00|1.00±0.00|0.94±0.05|0.52±0.01|0.10±0.01|0.86±0.00|0.91±0.01|
|LatentGAN|0.90±0.00|1.00±0.00|1.00±0.00|0.83±0.01|0.51±0.00|0.11±0.01|0.86±0.00|0.95±0.00|
|FMG (0.05%)|**1.00±0.00**|**1.00±0.00**|**1.00±0.00**|26.30±0.41|0.29±0.00|**0.12±0.00**|**0.90±0.00**|**1.00±0.00**|
FMG leads on all five unconditional generation metrics reported in our main paper. That said, we acknowledge its distributional match is weaker, as expected from a model trained on just *0.05%* of the data. However, its high validity, novelty, diversity, and uniqueness demonstrate FMG's potential as a compositional grammar backbone—suitable for downstream optimization (e.g., via MC-REINFORCE as done in DEG).
**Closing.** We will incorporate these extended analyses—on interpretability, standard benchmarks, and broader metrics—into the revised manuscript, and continue to expand FMG’s benchmarking suite.
We hope this update shows that FMG, while targeting a distinct use case, holds up to broader scrutiny and provides a novel, interpretable framework for molecule design. We would be truly grateful if you would consider raising your score and helping to advocate for this contribution.
Warm regards,
FMG authors | Summary: In this paper, the authors show that one can incorporate the “graph grammar” of molecules into a multimodal language model. Essentially, the method, called FMG, (i) takes a molecular graph as an input, (ii) extracts “features” of such a graph, (iii) represent them with images, (iv) ask a multimodal (vision-language) model to infer a grammar that governs the molecular graph via chain-of-thoughts reasoning and preference learning. The grammar can then be used for property prediction or to generate new molecular graphs. This way, the generative model can generate new molecules that are more “interpretable” since the process of generating such molecules is documented in the chain-of-thought traces.
Claims And Evidence: The claims are reasonable and the method itself is well-justified, albeit quite complex, involving many building blocks. I would like to see an ablation study on each building block.
Methods And Evaluation Criteria: The methods and the evaluation criteria do make sense.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: The reported numbers (diversity, synthesizability, validity, and membership) don’t seem to be significant compared to the (much simpler) baselines. Moreover, the presentations of the results are problematic: (i) lower scores are often bolded (e.g. in Tab. 1), and (ii) no error-bars are reported. I am thus wondering if the claims are actually validated by the experiments.
Moreover, the main point of the method: interpretability, lacks validation. It is unclear to me whether the explanations outputted by the model are actually useful. I would strongly suggest an additional study to quantify this.
Supplementary Material: Yes. The prompting template/examples.
Relation To Broader Scientific Literature: The idea of enforcing interpretability in a molecular generation is a great idea. Indeed, one of the current limitations of molecular generative models is the interpretability and synthesizability (or lack thereof) of the novel, generated molecules. So, this paper’s motivation is well-positioned within the literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: See above.
Questions For Authors: Please address the issues raised above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *The claims are reasonable and the method itself is well-justified, albeit quite complex, involving many building blocks. I would like to see an ablation study on each building block.*
* Thanks for acknowledging the reasonableness of our method! We do have an ablation study on each building block in Section 5.1 (Table 4). In the section, we isolate the importance of the MMFM’s role at each stage of the algorithm. We study the effect of ablating the MMFM with a known heuristic, along with a brief rationale. We also have an ablation of the FMG self-optimization in Sec 5.2 (App. C).
*The reported numbers don’t seem to be significant compared to the (much simpler) baselines… presentation of results are problematic: (i) lower scores are often bolded (e.g. in Tab. 1), and (ii) no error-bars are reported.*
* Thank you for pointing out these observations. Evaluating generative models is challenging and requires a holistic consideration of different, competing metrics. A method that scores high on one metric (e.g. synthesizability) but performs catastrophically badly on another (e.g. uniqueness) is not practical. When simple fixes don’t work (e.g., tuning sampling temperature), we make a note of it in the caption and exclude it from the rankings.
* Regarding (i), lower scores. In Tab. 1, the two T5 (I) methods are excluded from ranking (as mentioned in the caption) due to the difficulty in obtaining sufficient valid, unique samples, meaning their seemingly higher RS/Memb. scores don’t have sufficient sample support. The same is written in the caption of Tab. 2 for the VAE (T) methods.
* Regarding (ii), no error bars. In this case, robustness from multiple runs (the purpose of error bars) can simply be absorbed into the number of samples we generate. Generating a sufficiently large sample size can ensure greater statistical significance of the results, especially when 4 of our metrics (valid, novel, RS, memb.) are defined at the individual sample level and the other 2 metrics (diversity, uniqueness) evaluate coverage. For small molecules (Tab. 1), we generate 10000 samples; for large molecules (Tab. 2) we generate 1000 samples due to inference being more expensive for some of the large model baselines. In both cases, we see a fixed sample size of 1000 or 10000 already pushes multiple baselines to the limits, so generating more than what they’re capable of may further unbalance the comparisons (see (i)).
* To facilitate a more holistic validation, we made the following spider plots:
https://ibb.co/k62nj3c8
https://ibb.co/7JK24Y82
* Only the methods that lie on the Pareto frontier across all metrics are plotted.
* On small datasets, no method other than FMG reaches near 100% unique, valid & memb. simultaneously. For instance, GPT4 (ICL) appears to score higher on diversity & RS, but struggles to generate valid and unique samples.
* On real-world datasets, FMG generates the most unique, novel, valid & diverse samples, and methods that score higher on RS and Memb. have serious shortcomings.
* We will add the new plots to the paper to mitigate the challenge of evaluating different generative models by using competing metrics.
*...the main point of the method: interpretability, lacks validation. It is unclear to me whether the explanations outputted by the model are actually useful. I would strongly suggest an additional study to quantify this.*
* The explanation outputs are critical for the evaluation, interpretation and self-improvement of FMG.
* Evaluation: The GPT-generated explanations are assessed by human experts. We actually have 5 in-depth step-by-step walkthroughs in App. D, with 1 example molecule chosen per dataset (domain). Each step in each case study is scrutinized by an expert (see “Comments by Expert” highlighted in light purple).
* For a quantitative assessment, we tallied the GPT4’s turn-by-turn answers across the 5 case studies by whether the expert completely agrees or not. We had the expert classify the difficulty of the task in each turn.
|Dataset|Verdict|Easy|Medium|Hard|
|-|-|:-:|:-:|:-:|
|Small Dataset|Correct|6|3|0|
||Partial|0|1|0|
|Real-World Dataset|Correct|5|2|2|
||Partial|0|1|0|
* There were no instances where the expert thought GPT4’s explanation was flat out wrong. In all but two instances, the expert completely agrees with GPT4’s explanation.
* Interpretation: Additionally, in App. D.6, we include concluding thoughts delivered by the expert, highlighting LLM agents' strong performance in substructure extraction and limits in harder tasks requiring expert intuition.
* Self-improvement: The explanations directly influence the final grammar by summarizing the reasoning taken to achieve the decomposition. They are the inputs for our LLM-as-a-judge protocol, which pits discrepant decompositions against each other (Sec. 3.5) in a tournament, and decides which decompositions to use on the basis of the explanation quality. See our additional case study in our response to PGBN. | Summary: The paper proposes Foundation Molecular Grammar (FMG) using multi-modal foundation models. FMG induces interpretable graph grammars by converting molecules to images and using LLMs to identify the connection between molecular substructures. It outperforms baselines in molecular generation benchmarks, excelling in synthesizability, diversity, and data efficiency. FMG provides chemical interpretability, offering a new approach for automated molecular discovery workflows.
Claims And Evidence: The claims in the paper are well supported by evidence.
Methods And Evaluation Criteria: The paper uses LLMs to replace heuristics-based rules in molecular grammar methods, which is a novel and interesting way to incorporate domain knowledge into the process of molecular grammar learning. However, the use of rendered molecular images as the input to LLMs, as opposed to text-based molecule representation formats like SMILES or SELFIES, is not sufficiently motivated. The paper should provide a comparison between these molecule representation methods.
Theoretical Claims: The paper does not make theoretical claims.
Experimental Designs Or Analyses: The metrics and datasets used in the experiments are taken from earlier works on similar topics, and there appears to be no significant flaws in the experiments.
Supplementary Material: I reviewed all of the paper’s appendix except the case study, and had a brief view of the case study.
Relation To Broader Scientific Literature: The paper combines LLMs with molecular grammar learning and brings interpretability to LLM-based molecule generation, which can be valuable for molecular discovery tasks.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Why did the authors use images to describe the molecules to the LLMs? Did the authors experiment with text-based formats like SMILES and SELFIES?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: *The paper uses LLMs to replace heuristics-based rules in molecular grammar methods, which is a novel and interesting way to incorporate domain knowledge into the process of molecular grammar learning.*
Thanks for recognizing the novelty, soundness, and intrigue of our work!
*Why did the authors use images to describe the molecules to the LLMs? Did the authors experiment with text-based formats like SMILES and SELFIES?*
There are three reasons why we choose rendering molecular images as the input to LLMs:
1. Literature review of LLM’s chemistry comprehension abilities:
Foundation models like GPT-4o and Gemini have shown multi-modal comprehension ability in aligning natural language descriptions with corresponding images for advanced reasoning, but there has been less work on aligning formal languages like SMILES with corresponding images. This may be because LLMs find it easier to obtain semantics from natural descriptions than from SMILES. For instance, studies like [1, Section D] and [2] find that LLMs perform well on tasks that require reasoning from natural descriptions of molecules, but struggle when fed SMILES/SELFIES (e.g., 0% SMILES-to-name success in [1], Section D).
2. Technical formulation is in terms of hypergraphs
Our underlying formulation builds on the history of hyperedge-replacement grammars, which operate at the hyperedge (or substructure) level instead of the atom level. SMILES/SELFIES syntax doesn't easily support substructure-level annotations, and extending it to do so is not only out of scope for this work but would also deviate from the data used for LLM pre-training. Meanwhile, rdkit has a library of functions for highlighting, color-coding, and rendering substructures.
3. Interpretability of the grammar learning process.
We thought long and hard about how the complex, hierarchical structured representation of hypergraphs can be fed into LLMs. After initial conversations with chemists, we came to the conclusion that highlighting substructures is the most visually meaningful way to consume the information. We, in parallel with a few others [3], identified an opportunity for combining the latest multi-modal understanding and cheminformatics rendering tools.
We hope these points sufficiently motivate the visual representation input, and a summarized discussion will be added to the main text.
[1] White et al. Assessment of chemistry knowledge in large language models that generate code. Digital Discovery, 2(2):368–376, 2023.
[2] Guo et al. What Can Large Language Models Do in Chemistry? A Comprehensive Benchmark on Eight Tasks. NeurIPS, 2023.
[3] Wang, Zixu, et al. "Image-based generation for molecule design with SketchMol." Nature Machine Intelligence (2025): 1-12.
---
Rebuttal Comment 1.1:
Comment: The authors answered my questions, but more experiments may be needed to support the view they put forward: for example, processing textual or graph-structured representations of molecules as input, comparing the performance of the different input types, and finally selecting the optimal data representation.
---
Reply to Comment 1.1.1:
Comment: Dear uTAc,
Thank you for your thoughtful follow-up and for engaging deeply with our work. To start with, we already benchmark against methods specialized for each input modality—SMILES/SELFIES (MolT5, ChemT5, GPT4-ICL), graphs (MoLeR, RW), and hypergraphs (JT-VAE, Hier-VAE, MHG, DEG). Our results (Tables 1 & 2) and qualitative analyses (see responses to V4pu and PGBN) demonstrate FMG’s superior holistic performance and interpretability.
To further explore your suggestion, we conducted a new ablation study where we modified FMG to use a text-based encoding—FMG-Text—instead of molecular images. Since GPT-4o supports only text and image inputs, direct graph input is currently infeasible, though we see it as a promising future direction.
**Background: MMFM inputs.** Each FMG input is a grid of cells showing:
1. A molecule,
2. A molecule with one substructure highlighted,
3. A molecule with two substructures highlighted in different colors (Sec. 3.4).
While (1) can be replaced by SMILES, (2) and (3) require visual emphasis of substructures, which text-based formats struggle to express.
**Attempt 1: Tagged SMILES**
We added tags (e.g. <>) into SMILES to denote motif boundaries. For example:
Original SMILES: C=CC(=O)OC1CC2CC1C2
Motif: C=CC(=O)O\<C\>1\<CC\>2C\<C\>1\<C\>2
GPT-4o failed basic comprehension tests. Although it understood that numbers denote ring closures, it became confused about which numbers were relevant to the motif's ring. As a result, it interpreted the tagged string via straightforward character concatenation (e.g., as CCCCC, a straight-chain alkane) rather than recognizing the cyclopentane motif. We attempted to add ring numbers within the tags to clarify the motif's ring structure, but this only obfuscated the global context and further degraded the model's understanding.
**Attempt 2: Atom-number encoding** We then tried a verbose but reliable approach: number all atoms and list motif atoms. For example:
Original SMILES: [C:1]=[C:2]\[C:3\](=[O:4])[O:5][C:6]1[C:7][C:8]2[C:9][C:10]1[C:11]2
- (optional) Motif 1: 6,7,8,10,11
- (optional) Motif 2: 8,9,10,11
This improved parsing and allowed us to fully re-implement FMG with text inputs (FMG-Text) by swapping each visual cell for its text-based counterpart.
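For concreteness, here is a minimal sketch of how this atom-number encoding can be parsed and how a motif is spelled out as an atom list. `atoms_in_mapped_smiles` and `motif_text` are hypothetical helper names for illustration only, not part of the FMG codebase:

```python
import re

def atoms_in_mapped_smiles(smiles):
    """Extract the atom-map numbers from an atom-mapped SMILES string,
    i.e. the N in bracket atoms of the form [C:N]."""
    return [int(m) for m in re.findall(r"\[[^:\]]*:(\d+)\]", smiles)]

def motif_text(motif_atoms):
    """Render a motif as the comma-separated atom-number list given to the LLM."""
    return ",".join(str(a) for a in motif_atoms)

# The numbered acrylate example from above (unescaped form)
mapped = "[C:1]=[C:2][C:3](=[O:4])[O:5][C:6]1[C:7][C:8]2[C:9][C:10]1[C:11]2"
print(atoms_in_mapped_smiles(mapped))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
print(motif_text([6, 7, 8, 10, 11]))   # "6,7,8,10,11" (Motif 1 above)
```

Ring-closure digits outside the brackets are ignored by the pattern, which is exactly what made this encoding less ambiguous than the tagged-SMILES attempt.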
We made sure everything worked as expected on a few case studies. Then, we ran our full evaluation protocol. The downstream generation results are as follows:
**Results: Small Datasets**
|Method|Valid (Avg.)|Unique|||Div.|||RS|||Memb.|||
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|FMG-Text (Best)|100\%|100\%|100\%|100\%|0.73|0.46|0.84|33.1\%|87.1\%|98.5\%|99.6\%|100\%|99.6\%|
|FMG (Best)|100\%|100\%|100\%|100\%|0.73|0.46|**0.85**|**61.7\%**|**93.0\%**|**99.1\%**|99.6\%|100\%|**99.8\%**|
**Results: Real-World Datasets**
|Method|Valid (Avg.)|Unique||Novelty||Div.||RS||Memb.||
|-|-|-|-|-|-|-|-|-|-|-|-|
|FMG-Text (Best)|100\%|100\%|100\%|100\%|91\%|0.92|0.93|69\%|77\%|**43\%**|33\%|
|FMG (Best)|100\%|100\%|100\%|100\%|**92\%**|**0.93**|0.93|**70\%**|**78\%**|38\%|**46\%**|
For completeness, we replicated Sec. 5.1's study for FMG-Text too. Full results: https://ibb.co/4ZpLMywg
**Findings.**
- FMG with text only inputs is a competitive baseline when images cannot be used. Barring FMG, FMG-Text leads in 13/24 columns of Tables 1 & 2.
- FMG with image inputs still performs best overall—especially on diversity and synthesizability, which are crucial for practical molecule generation.
**Technical Note: SMILES/SELFIES compatibility.**
A key modeling consideration is that SMILES/SELFIES are incompatible with our hypergraph formulation, which treats **bonds** (not atoms) as the fundamental units. This ensures that each clique’s atoms (edges) are disjoint, which is essential for our grammar induction and molecule reconstruction.
Because SMILES/SELFIES encode atoms explicitly but treat bonds implicitly, there’s no straightforward way to annotate bonds or define disjoint cliques. Enforcing compatibility would require taking the bond-atom dual of (1) our grammar formulation or (2) the SMILES/SELFIES syntax. This mismatch, discussed in Sec. 3.1, further motivates our use of images, where we can easily highlight constituent bonds.
**Analysis.**
We find that text inputs make small motif identification (membership) easier, while image inputs better support global substructure reasoning, which drives higher synthesizability. This matches known strengths of vision-language models and aligns with your intuition that representation format shapes performance.
We will include these new results in the revised manuscript. In particular, we will incorporate the full FMG-Text results and supporting analyses into the main text, further clarifying how representation format affects performance.
We appreciate your suggestions, which have meaningfully strengthened the paper. We hope this additional analysis provides the clarity you were looking for—and would be deeply grateful if you would consider championing the work! | null | null | null | null | null | null | null | null |
Unifews: You Need Fewer Operations for Efficient Graph Neural Networks | Accept (poster) | Summary: This paper proposes Unifews, a sparsification for both graph and weight matrix. The purpose of such sparsification is to boost the scalability of GNN.
Claims And Evidence: A major claim of this paper is the speed-up on the ogbn-papers100M dataset; I have some questions about this claim, please refer to the later sections.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I briefly checked the theoretical claims; they look solid.
Experimental Designs Or Analyses: No obvious problem.
Supplementary Material: Yes, mainly the hardware part.
Relation To Broader Scientific Literature: The key contribution is scalability, this is a very active field of GNN.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: S1: The paper is easy to follow.
S2: Theoretical results seem solid.
S3: The literature review is comprehensive.
W1: It would be better to include more baselines. Currently, only graph sparsification methods are used, but since this paper is about scalability, I think other methods should also be included, e.g., other compression methods like coarsening/condensation, and neighbor sampling methods.
W2: The 100 times acceleration on the ogbn-papers100m is the main contribution of this paper, the authors mention this multiple times in the paper. But the experimental setup of this result is a bit unclear. Please refer to the question section.
Other Comments Or Suggestions: No obvious typo.
Questions For Authors: Q1: About the 100-times acceleration: Table 1 reports 19212 s for the graph propagation of SGC on the ogbn-papers100M dataset, which is more than 5 hours. I remember this operation taking roughly 30 minutes on my machine with only a CPU. I know there are performance discrepancies across hardware, but can you double-check your result? It seems too long.
Q2: On the other hand, the acceleration seems to conflict with the "Complexity Analysis" section in Line 294. It says the computation complexity is reduced by $O(1-\eta_{\alpha})$, and $\eta_{\alpha}$ is set to 50% in Table 1, so shouldn't there be only about a 2x acceleration? This matches the results on all other datasets except ogbn-papers100M. Please kindly correct me if I am missing something.
Q3: DSpar is not used as a baseline in Table 1, could you elaborate why?
Q4: Can you try other scalability methods, like GraphSAGE with neighbor sampling, GraphSAINT, and ClusterGCN, on ogbn-papers100M and report the runtime and accuracy? These are all classical scalability methods and their code should be easy to find.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback. We have carefully checked the experiments and would like to address the questions as follows.
## W1&Q4
We further evaluate other scalability methods and present representative accuracy and training time below. We also evaluate the smaller `cora` for comparison. Notably, **only `GraphSAGE` is applicable** to `papers100M` through mini-batch training, while the other methods incur OOM errors. Methods such as `GraphSAINT` and `ClusterGCN` require more time for processing the graph data and exhibit weaker scalability on larger graphs.
### Mini-batch `cora`
| Edge ratio | 30% | 50% | 80% | 90% |
|------------|-----|-----|-----|-----|
| `DropEdge` Acc (%) | 86.4 | 86.8 | 79.5 | 43.7 |
| `DropEdge` Time (s) | 7.13 | 7.31 | 7.09 | 6.87 |
| `GraphSAGE` Acc (%) | 77.6 | 78.2 | 79.4 | 78.7 |
| `GraphSAGE` Time (s) | 2.00 | 1.80 | 1.86 | 1.83 |
| `GraphSAINT` Acc (%) | 74.7 | 78.1 | 77.8 | 77.2 |
| `GraphSAINT` Time (s) | 228.02 | 167.29 | 179.58 | 193.31 |
| `ClusterGCN` Acc (%) | 78.3 | 76.9 | 77.1 | 74.3 |
| `ClusterGCN` Time (s) | 4.50 | 4.20 | 3.47 | 2.66 |
### Mini-batch `papers100M`
| Edge ratio | 30% | 50% | 80% | 90% |
|------------|-----|-----|-----|-----|
| `GraphSAGE` Acc (%) | 46.25 | 48.45 | 41.02 | 42.64 |
| `GraphSAGE` Time (s) | 18208 | 13662 | 6565 | 3850 |
We would like to highlight the difference between the settings and our evaluation in the paper as follows:
1. The training is conducted in **mini-batch** with batch size=100K. Testing is performed in full-graph manner on CPU due to the excessive graph size.
2. The "edge ratio" is calculated by $\eta=1-m'/m$, where $m'$ is the total number of sampled neighbors used in propagation, and $m$ is the edge size in the raw graph. This concept is different from edge sparsity since the **propagation graph is not static**.
3. As elaborated in *Sec A.2*, the graph structure used for coarsening and sampling methods is dynamically determined during training, which limits their wider generality. For example, they cannot be described by the graph smoothing framework, and **can hardly apply to decoupled models**.
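The edge-ratio bookkeeping described in point 2 can be sketched as follows (a toy illustration with made-up numbers, not the benchmark code; `edge_ratio` is a hypothetical name):

```python
def edge_ratio(num_sampled_messages, num_raw_edges):
    """Edge ratio eta = 1 - m'/m, where m' is the total number of sampled
    neighbors actually used in propagation and m is the raw edge count.
    Unlike static edge sparsity, m' is re-drawn each epoch for samplers."""
    return 1.0 - num_sampled_messages / num_raw_edges

# e.g. a sampler drawing 10 neighbors per node on a graph with 100 nodes
# and 2000 directed edges uses m' = 100 * 10 = 1000 messages per layer:
print(edge_ratio(1000, 2000))  # 0.5, reported as the 50% column
```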
## W2&Q1
We elaborate the setups of the `papers100m` result in *Tab 1*:
1. We find that the previous result of `SGC` is largely affected by the "**cold start**" issue, where the first run is significantly slower than the others due to the overhead in accessing and loading the data. By excluding the first run, the average time is around 15K seconds, which is close to `APPNP` and `GraphSAGE`.
2. To ensure comparable results with baselines, the decoupled propagation of all methods in Tab 1 is implemented by **single-thread C++ computation** that successively performs message-passing for each edge. The process may be slower than the common implementation based on matrix computation, such as Eigen in C++ and PyTorch/Numpy in Python.
3. The wall-clock time also includes auxiliary operations within the propagation, such as **accessing the storage of attributes and neighboring nodes**. Our implementation stores edges in the format of an adjacency list, which is tailored for the pruning operation. However, it may be slower than the matrix format when the edge scale is large, and results in a greater scalability issue for the full `papers100m`.
In practice, both the baseline and Unifews computation can be accelerated by approaches such as multi-threading, vectorization, and enhanced data structures. However, the relative speedup can still be achieved considering the FLOPs reduction.
## Q2
The acceleration ratio of wall-clock time is also affected by several factors beside the complexity analysis:
1. As suggested in *Sec 4.3*, the complexity provides a lower bound for the computation reduction. As **edge removal accumulates** across hops, the actual reduction rate can be larger than the theoretical bound. In fact, Unifews achieves a $3.3\times$ FLOPs reduction compared to SGC, which is greater than the expected $2\times$.
2. Evaluation in *Fig 7* shows the **improvement of embedding sparsity** brought by Unifews pruning. This effect is more significant for sparse graphs such as `papers100m`, since the magnitude of node embedding is more likely to diminish when edges are pruned. The improved sparsity further facilitates pruning in subsequent layers and leads to faster computation.
3. As elaborated in *Q1(3)*, the overhead of **auxiliary graph operations** is more sensitive to the graph scale. For example, finding a neighbor of a node is faster if the edge set is smaller. This may result in speedup greater than the linear improvement.
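The accumulation effect in point 1 can be illustrated with a toy model (hypothetical accounting, not the paper's exact FLOPs counter): if each hop keeps only a fraction of the edges surviving the previous hop, the total propagation cost drops faster than the per-hop ratio suggests.

```python
def propagation_flops_ratio(keep_per_hop):
    """Fraction of propagation FLOPs kept when each hop prunes a further
    fraction of the edges that survived the previous hop, relative to an
    unpruned model doing full propagation at every hop."""
    kept, remaining = 0.0, 1.0
    for keep in keep_per_hop:
        remaining *= keep   # pruning compounds across hops
        kept += remaining
    return kept / len(keep_per_hop)

# A nominal 50% per-hop keep ratio over two hops keeps only 37.5% of the
# FLOPs, i.e. more than the 2x reduction the per-hop bound alone implies:
print(propagation_flops_ratio([0.5, 0.5]))  # 0.375
```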
## Q3
In *Tab 1*, we only present the results of approaches applicable to **decoupled models** with the separated graph propagation in C++. Since the official implementation of `DSpar` is based on the PyG framework for iterative models, we are unable to apply it to the decoupled propagation. | Summary: This paper introduces UNIFEWS a technique to sparsify GNNs by dropping messages. The authors justify their UNIFEW by proofs that provide theoretical guarantees. In experiments, UNIFEWS allows dropping almost all edges in the graphs without much impact on predictive performance. This leads to a significant speed-up by a factor of up to x100.
## update after rebuttal
The authors have addressed many concerns effectively in the rebuttal. While I still believe the work primarily demonstrates that graph structure is often unimportant for many datasets, the authors' contributions are nonetheless meaningful and well-supported. My recommendation lies between a weak accept and an accept; given the options, I lean toward recommending acceptance.
Claims And Evidence: The authors provide convince evidence in the form of proofs and experimentts.
Methods And Evaluation Criteria: See (W2).
Theoretical Claims: I did not check the proofs.
Experimental Designs Or Analyses: The experimental design seems sound.
Supplementary Material: I skimmed the proofs and the "Additional Experiments" section.
Relation To Broader Scientific Literature: Related work is good.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths:**
- (S1) The proposed method is intuitive and conceptually simple: it is clear that dropping unimportant messages will lead to a speed-up at little cost to predictive performance (the hard part is choosing which messages to drop).
- (S2) The proposed method is backed by theoretical arguments:
- Theorem 4.1 gives us a bound on the spectral similarity between sparsified and non-sparsified laplacian
- For decoupled architectures, UNIFEWS allows for an $\epsilon$ approximation of graph smoothing (Proposition 4.2)
- (S3) The proposed method allows to drop almost all edges without losing predictive performance (Figure 4).
**Weaknesses:**
- (W1) The speed analysis (Section 5.2) is done only for decoupled models and not for iterative models. Since decoupled models are more niche than iterative models, this weakens the paper. This is especially crucial as Fig. 6 shows that for iterative models the propagation step is not the dominant cost, implying that UNIFEWS might lead to only a small speedup on iterative models.
- (W2) I am not convinced that the strong predictive performance in the low-edge regime (S3) is truly caused by UNIFEWS. After all, achieving the same accuracy with just 1% of the edges should simply mean that the graph structure is uninformative. In that case, it is not clear what the true utility of UNIFEWS is. This could be combated by testing UNIFEWS on more diverse datasets to see how it behaves on problems where the graph structure is important.
**Overall,** I think that this is a good paper that is held back by some small issues (mainly W2). Thus I vote for weak accept.
Other Comments Or Suggestions: - Please use `\mid` in your definition of neighborhood instead of | ($\{ a | ...\}$ vs $\{ a \mid ...\}$)
- "in an one-time", maybe "a one-time"?
- Theorem 4.1 could be formulated better, it is not mentioned how the sparsified laplacian is obtained which is the crucial part of the theorem.
Questions For Authors: - The operation performed by UNIFEWS simply thresholds out messages (=vectors) whose magnitude is smaller than some $\delta$. This operation is by default non-differentiable. Could this cause problems? Might it improve to make this process differentiable during training?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are thankful for the detailed and insightful comments from the reviewer. We address the specific reviews below with further evaluation results.
## W1
In the paper, we do not show the wall-clock time for iterative models mainly because of the **variety in baseline implementations**. The existing approaches vary greatly in terms of GNN frameworks and the implementation of sparsification. For example, `GLT` applies masks on the adjacency matrix, while `CGP` utilizes learnable edge weights stored in variable tensors. As a result, the wall-clock time among different models is hardly comparable in a fair manner. We hence employ FLOPs to measure the expected computation cost, which is also widely used in related studies.
Furthermore, the FLOPs measured in *Fig 6* of different operations are not equivalent to wall-clock times. This is because the **feature transformation is performed on GPU** with highly efficient dense-dense matrix-matrix multiplication (MM), while the graph propagation as sparse-dense matrix-matrix multiplication (SPMM) is not applicable to most acceleration techniques. In fact, the adjacency matrix is usually loaded on CPU, rendering the propagation a complicated cross-device and irregular computation, which is usually considered as the bottleneck of GNN efficiency.
We present the inference time breakdown of Unifews for `GCN+arxiv` in the following table. It can be observed that both operations benefit from the sparsification.
### `GCN+arxiv`
| Edge Sparsity | Weight Sparsity | Graph Propagation (s) | Feature Transform (s) |
| --- | --- | --- | --- |
| 0% | 0% | 5.85 | 0.58 |
| 50% | 0% | 3.54 | 0.61 |
| 0% | 50% | 5.54 | 0.36 |
| 50% | 50% | 3.66 | 0.36 |
In summary, we believe achieving practical speedup is largely an *implementation* problem. There is a series of works [Deng et al., 2020a] developing software and hardware systems and achieving real speed improvement, which is orthogonal to our contribution on how to achieve such sparsity by *algorithmic* design.
## W2
While approaching 100% sparsity, Unifews still **preserves certain graph structural information**, which is critical to model performance. For comparison, we present the result of `MLP` with no graph information as below. It can be observed that the performance improvement is significant on `cora` and `citeseer`.
### `MLP` ($L=2$)
| Dataset | `cora` | `citeseer` | `pubmed` |
| --- | --- | --- | --- |
| Acc (%) | 75.2 | 71.7 | 87.0 |
We would like to discuss the difference of Unifews with MLP at high sparsity in the following aspects:
1. Given the large number of edge/weight entries, a number of entries remain even when the pruning threshold is high. As a result, the pruning ratio rarely reaches exactly 100% in our experiments. This phenomenon has also been observed in previous NN and GNN pruning studies [Liu et al., 2023a], where maintaining **a small fraction (0.1%-1%) of edges or weights can largely preserve GNN performance**.
2. As discussed in *Fig 7*, Unifews brings higher sparsity to the learned representation, which is known to be **beneficial for model learning**. This improvement is observed for both edge and weight sparsification in previous works such as [You et al., 2022; Chen et al., 2021]. As the sparsity is progressively acquired during training iterations, the model benefits from learning on perturbed variants of the data, leading to improved performance.
3. Due to the implementation of acquiring the **normalized adjacency $\tilde{A}$** in PyG, the diagonal entries are $D^{-1}$ instead of $I$. Consequently, when the edge sparsity is close to 100%, each graph propagation can be viewed as a normalization to node features based on degrees, which is different from MLP. This process also incorporates graph structural information and may also contribute to better performance.
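As a toy sketch of point 3 (illustrative only, not the PyG implementation; `degree_rescale` is a hypothetical name): once every off-diagonal edge is pruned, each propagation step collapses to scaling node $i$'s features by the $D^{-1}$ diagonal entry, which still reflects the node's degree.

```python
def degree_rescale(features, aug_degrees):
    """With all off-diagonal edges pruned, one propagation step reduces to
    h_i <- h_i / d_i, where d_i is node i's self-loop-augmented degree
    (the D^{-1} diagonal entry). Degree-aware rescaling, not a plain MLP."""
    return [[x / d for x in row] for row, d in zip(features, aug_degrees)]

h = [[2.0, 4.0], [3.0, 9.0]]            # two nodes, two feature dims
print(degree_rescale(h, aug_degrees=[2, 3]))  # [[1.0, 2.0], [1.0, 3.0]]
```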
## C1-3
We thank the reviewer for pointing out the issues. We will carefully fix the typos in the revised version.
The sparsified $\hat{L}$ corresponds to the pruned edge set $\hat{\mathcal{E}}$ acquired by Unifews as in *Lemma 3.3* and *Lemma B.1*. We will improve the formulation in the revised version.
## Q1
The pruning process itself is **not differentiable**, similar to conventional neural network pruning [Han et al., 2015]. Intuitively, pruning can be implemented by applying a 0-1 mask to the target matrix every time it is used during forward inference and backward propagation. Gradients of the pruned entries (i.e., with zero values) are naturally kept at zero. Hence, the pruning process **does not affect normal model training**.
It is possible to augment the pruning to be differentiable, or even adaptive during training. Works such as AdaptiveGCN, SGCN, and Shadow-GNN discussed in *Sec A.1* are similar to this idea. However, the process may **incur additional overhead** for learning these variables. Hence, in this paper, we mainly use the simple static strategy to ensure efficiency and align with our theoretical analysis.
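The static masking strategy can be sketched as follows (a minimal illustration of magnitude pruning with a 0-1 mask; function names are hypothetical, not from the Unifews code):

```python
def magnitude_mask(weights, delta):
    """0-1 mask keeping entries with |w| >= delta (static magnitude pruning)."""
    return [[1.0 if abs(w) >= delta else 0.0 for w in row] for row in weights]

def apply_mask(weights, mask):
    """Apply the mask at every use of the matrix. Masked entries stay zero,
    so their gradients are zeroed too and training proceeds normally."""
    return [[w * m for w, m in zip(rw, rm)] for rw, rm in zip(weights, mask)]

W = [[0.8, -0.05], [0.01, -1.2]]
mask = magnitude_mask(W, delta=0.1)
print(mask)                 # [[1.0, 0.0], [0.0, 1.0]]
print(apply_mask(W, mask))  # small-magnitude entries zeroed out
```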
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I think you raise some good points. However, I remain unconvinced whether your results are not simply showing that GNNs are the wrong tool for the given datasets. Especially, W2.3 is an interesting direction that might be worth investigating in the future. I chose to maintain my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the feedback. To further investigate the performance with high sparsity and with diagonal entries, we present ablation studies comparing MLP, unpruned GCN, and Unifews with different diagonal schemes. We also extend to the heterophilic datasets `cornell` and `wisconsin` [R1], where the graph information is known to be malignant to non-heterophilic models such as GCN. In the following table, names in the format of `Unifews-99-0` denote Unifews with edge sparsity 99% and weight sparsity 0%, while `Unifews(0)` and `Unifews(1)` refer to the variants by setting the diagonal entries of the adjacency matrix to 0 and 1, respectively. Results better than the unpruned GCN are highlighted in **bold**. We mainly draw the following conclusions:
1. The training process of `Unifews` is similar to `GCN`, showing similar patterns of performance improvement or degradation compared to `MLP`. This indicates that even when a large portion of the edges is pruned, the graph structure can still be gradually learned during the training iterations through message passing based on the remaining edges.
2. The diagonal entries, i.e., using $A+I$ instead of $A$ to represent the graph structure, are critical for model performance, which is consistent with the GCN paper [Kipf & Welling, 2017]. Hence, `Unifews(0)` usually performs worse and is particularly poor with high edge sparsity.
3. The effect of the diagonal entries can be viewed as **amplifying the inductive bias from graph information** when training with the pruned graph. If the graph structure is benign (`cora` and `citeseer`), pruning while keeping the diagonal entries may further improve the performance. Conversely, keeping the diagonal entries further decreases the accuracy under heterophily (`cornell` and `wisconsin`).
4. Whether `Unifews` (diagonal $D^{-1}$) or `Unifews(1)` (diagonal $I$) is better depends on the specific dataset. We hence use the former one to be consistent with the PyG GCN implementation.
| Dataset | `cora` | `citeseer` | `pubmed` | `cornell` | `wisconsin` |
|-|-|-|-|-|-|
| `MLP` | 75.2 | 71.7 | 87.0 | 73.0 | 80.4 |
| `GCN` | 88.3 | 74.9 | 88.8 | 59.5 | 64.7 |
| `Unifews-50-0` | **89.3** | **76.0** | 87.9 | 59.5 | 56.9 |
| `Unifews(0)-50-0` | 86.5 | 71.6 | 83.8 | 59.4 | 51.0 |
| `Unifews(1)-50-0 `| **88.4** | **75.1** | 88.0 | 59.5 | 56.9 |
| `Unifews-99-0` | **89.1** | **76.0** | 88.1 | 43.2 | 52.9 |
| `Unifews(0)-99-0` | 45.5 | 30.5 | 49.8 | 43.2 | 52.9 |
| `Unifews(1)-99-0` | **88.9** | **75.5** | 88.5 | 40.5 | 52.9 |
[R1] Geom-GCN: Geometric Graph Convolutional Networks. ICLR'20. | Summary: This paper explores strategies to accelerate GNN computation by integrating both structural sparsification and weight parameter pruning. Specifically, it introduces a framework called UNIFEWS, which adaptively and progressively simplifies computations while providing theoretical guarantees on the accuracy tradeoff. Experimental results validate the proposed approach.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: For Theorem 4.1, I have not verified the proof; however, it is crucial to include synthetic examples illustrating the bounds. This would help readers better understand their behavior, especially since the ranges of the constants in the theorem are not provided.
Experimental Designs Or Analyses: I find the experimental section to be quite comprehensive and did not notice any major shortcomings, except for a few questions, which I have outlined in the "Questions" section below.
Supplementary Material: I did not check that.
Relation To Broader Scientific Literature: The broader connection of this paper to general scientific discovery is not immediately clear. However, the proposed efficiency improvements contribute to the wider deployment of GNNs on larger-scale datasets and applications, which is certainly a valuable advantage.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ## Strengths
1. The paper is well-presented with excellent readability, supported by clear diagrams and algorithmic aids for better clarification.
2. The distinction between iterative GNNs and decoupled GNNs enhances the generality of the proposed approach.
3. A complexity analysis of the proposed method(s) is provided.
## Weaknesses
1. DropEdge [Rong et al., 2020] should be included as a baseline in the comparisons for iterative sparsification.
2. The variance of numerical metrics should be reported in all plots and tables.
3. Please check my questions below.
Other Comments Or Suggestions: Please see my questions below.
Questions For Authors: ## Questions
1. In the experiments, the backbone GNNs are all set to 2 layers, effectively avoiding the over-smoothing problem. This suggests that the improvements from sparsification may stem from factors other than mitigating over-smoothing. What are your thoughts on this? What other possible explanations could there be?
2. The results in Figures 3 and 5 are somewhat difficult to interpret when weight/edge sparsity approaches 100%, as the performance even improves compared to the corresponding backbones. Wouldn’t a 100% edge sparsity reduce the model to an MLP, making performance entirely dependent on node features? Similarly, when weight sparsity nears 100%, no node feature information should be preserved—so how is high node classification accuracy still achieved?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive feedback from the reviewer. Below, we provide detailed responses with new experiments following the suggestions.
## T1
The range of the constants in *Thm 4.1* is discussed in *Sec B.2*. The constant $2<\alpha<3$ is the exponent of the power-law degree distribution, and $\sigma>0$ is the standard deviation of the feature distribution. Since $t=1/(\alpha-1)$, we have $0.5<t<1$. The constant $C>0$ is related to the norm of the embedding $\|p\|$, as it maps the sparsity ratio $0\le\eta\le 1$ to the actual threshold value applied to the embedding.
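The stated range of $t$ follows immediately from the range of $\alpha$; a quick numeric check (illustrative only, not part of the original rebuttal):

```python
# Numeric check of the stated ranges: with power-law exponent
# 2 < alpha < 3 and t = 1/(alpha - 1), t must fall in (0.5, 1).
for alpha in (2.01, 2.5, 2.99):
    t = 1 / (alpha - 1)
    assert 0.5 < t < 1
print("t stays in (0.5, 1) as alpha ranges over (2, 3)")
```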
On **realistic datasets**, we perform evaluation in *Fig 12*, where the blue line shows that by varying the threshold $\delta_a$ (x-axis), the acquired sparsity $\eta_a$ (right y-axis) aligns well with the relationship in *Thm 4.1* with two dataset-specific constants.
We further utilize **GenCAT** [R1,R2] to generate synthetic graphs with randomized connections and features by varying its parameters. Then, we evaluate the relationship between the edge threshold $\delta_a$ and the edge sparsity $\eta_a$ following the settings of Fig 12. The results are available in: https://anonymous.4open.science/r/Unifews-A91B/plot.pdf . As an overview, the pattern is similar to the one in Fig 12, while the constants are effectively affected by the changes of edge and feature distribution in the GenCAT graph.
[R1] GenCAT: Generating attributed graphs with controlled relationships between classes, attributes, and topology. Information Systems'23.
[R2] Beyond Real-world Benchmark Datasets: An Empirical Study of Node Classification with GNNs. NeurIPS'22.
## W1
We present the results of `DropEdge` for `GCN+cora` as follows. As mentioned in the DropEdge paper, its scheme is **different from graph sparsification** since: (1) its edge removal is performed randomly at each training time; (2) the dropped edges are not accumulated throughout propagation layers, which differs from Unifews. In comparison, the `Random` baseline used in our experiments is closer to Unifews regarding the two aspects above, while using random edge removal. Hence, we mainly use the `Random` baseline in our experiments.
|Edge Sparsity|30%|50%|80%|90%|
|--|--|--|--|--|
|`DropEdge`|86.4|86.8|79.5|43.7|
|`Random`|85.5|86.3|85.0|80.8|
Empirically, the **variance** of Unifews is within the range of 1-3%, which is slightly larger than the backbone model due to the perturbation of edge and weight sparsification. We will include the variance in the revised paper.
## Q1
For iterative models, Unifews removes insignificant entries and improves representation sparsity as shown in *Fig 7*. The **increased sparsity** is known to be beneficial for model learning, as revealed by [Han et al., 2015] for general neural networks, and similarly evaluated for GNNs in [You et al., 2022; Chen et al., 2021]. In brief, it is believed that sparsification can be viewed as a form of perturbation during learning. By enhancing sparsity, the model can strengthen the useful neural connections within network weights and focus on the most informative features, leading to improved performance.
Regarding **over-smoothing**, we further study the particular effect with the iterative `GCNII` of 32 layers in Appendix *Fig 11*, and with the decoupled `SGC` of 5-80 layers in *Fig 14*. Both results demonstrate that, by increasing model layers, the accuracy of the pruned model is largely preserved thanks to the residual connections. Hence, we conclude that Unifews pruning also benefits accuracy by alleviating over-smoothing.
## Q2
The performance near 100% sparsity is affected by various factors. We would like to compare it to MLP in the following aspects:
1. Given the large number of edge/weight entries, a number of entries remain even when the pruning threshold is high. As a result, the pruning ratio rarely reaches exactly 100% in our experiments. This phenomenon has also been observed in previous NN and GNN pruning studies [Liu et al., 2023a], where maintaining **a small fraction (0.1%-1%) of edges or weights can largely preserve performance**.
2. As elaborated in *Q1*, **higher sparsity** induced by Unifews pruning can inherently enhance model performance. During training iterations where pruning is progressively applied, the model benefits from learning on perturbed variants of the data, leading to improved performance. This improvement is observed in both edge and weight sparsification.
3. Due to the implementation of acquiring the **normalized adjacency $\tilde{A}$** in PyG (`torch_geometric.nn.conv.gcn_conv.gcn_norm`), the diagonal entries are $D^{-1}$ instead of $I$. Consequently, when the edge sparsity is close to 100%, each graph propagation can be viewed as a normalization to node features based on degrees, which is different from MLP. This process incorporates graph structural information and may also contribute to better performance. We will revise *Alg 1* to reflect this distinction explicitly.
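To make point 3 concrete, here is a minimal numpy sketch (an illustration of the `gcn_norm`-style self-looped symmetric normalization described above, not the paper's code): the diagonal entries equal the inverse self-looped degrees, so even a fully pruned off-diagonal still rescales features by degree rather than acting as the identity.

```python
import numpy as np

# Toy 3-node graph: node 0 is connected to nodes 1 and 2.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])

# Self-looped symmetric normalization, as in PyG's gcn_norm:
# A_tilde = D^{-1/2} (A + I) D^{-1/2}, with D from the self-looped graph.
A_loop = A + np.eye(3)
d = A_loop.sum(axis=1)                     # self-looped degrees [3, 2, 2]
A_tilde = np.diag(d ** -0.5) @ A_loop @ np.diag(d ** -0.5)

# Diagonal entries are 1/d_i, not 1: pruning every off-diagonal entry
# leaves a propagation step that scales node features by D^{-1},
# which still encodes degree information (unlike a plain MLP).
assert np.allclose(np.diag(A_tilde), 1.0 / d)
```

With the off-diagonal zeroed out, `A_tilde @ X` reduces to `np.diag(1.0 / d) @ X`, matching the rebuttal's $D^{-1}\hat{P}_{(l)}$ update.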
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your rebuttal. I find the additional experiments and explanations quite helpful, and the comparisons with DropEdge provide valuable insights. I will keep my score, since it's already positive. However, regarding Q2.3, if $A$ is already $\mathbf{0}$, how do the aggregation and propagation operators retain any structural information? I might have missed something; please verify and clarify this part accordingly.
Best,
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the comments. We would like to elaborate on the scheme of diagonal entries in the adjacency matrix with more details. As in Sec 2.1, we use the **self-looped adjacency matrix $\bar{A} = A + I$** to represent the graph structure. The diagonal entries can be regarded as residual connections for keeping the node features during propagation. During Unifews pruning, the diagonal entries are naturally preserved, which is represented by line 3 in Alg 1: $\hat{P}\_{(l+1)} \gets \hat{P}_{(l)}$. These entries are excluded in sparsity calculation, since they do not cost additional computation.
In Q2.3, we intend to elaborate that, due to the graph normalization implementation, the actual propagation matrix used in `Unifews` for iterative GCN is $\tilde{A} = D^{-1/2}(A + I)D^{-1/2}$. In this case, Alg 1 line 3 should be modified to $\hat{P}\_{(l+1)} \gets D^{-1}\hat{P}_{(l)}$, which still preserves certain structural information when the rest of $A$ is pruned to 0.
Another possible alternative is setting diagonal entries outside normalization: $\tilde{A} = D^{-1/2}A D^{-1/2} + I$ (denoted as `Unifews(1)`), which is equivalent to MLP when $A=O$. We conduct additional experiments to compare the two schemes in the reply to *Reviewer 9jfo*. In summary, the results imply that the diagonal entries, along with other edges, can **amplify** (not necessarily **improve**) the inductive bias from graph information during Unifews training. Hence, the improvement over the unpruned backbone is possibly a special case on certain datasets. | Summary: The paper proposes a framework named UNIFEWS (UNIFied Entry-Wise Sparsification), which aims to improve the learning efficiency of Graph Neural Networks (GNNs) by jointly sparsifying the graph and the weight matrix. By incrementally increasing sparsity layer by layer, the framework significantly reduces the computational operations in GNNs without notably compromising model accuracy. Theoretically, the authors establish a new framework to characterize the learning of sparsified GNNs and prove that UNIFEWS can effectively approximate the learning objective within bounded error. Experiments show that UNIFEWS achieves efficiency improvements on multiple datasets, reducing matrix operations by 10 to 20 times and accelerating computations by up to 100 times for graphs with billions of edges.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes. All experimental frameworks were rigorously validated through specific statistical methods and cross-verified against benchmark datasets, as documented in the Evaluation section and supplementary materials.
Supplementary Material: Yes. The parts include the additional appendix on Detailed Proof and Theoretical Analysis, etc.
Relation To Broader Scientific Literature: This study makes a meaningful contribution to the existing body of knowledge in the related literature.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strength:
* A novel joint sparsification technique is proposed, which unifies the operations of the graph and the weight matrix and establishes a theoretical connection between the graph optimization process and sparsification. This innovative perspective provides new ideas for the optimization of GNNs and fills the theoretical gap in existing research regarding the joint sparsification of graphs and weights.
* The concept of ϵ-spectral similarity is introduced, and its effectiveness in the spectral domain is theoretically demonstrated for UNIFEWS. This provides a more rigorous theoretical guarantee for the sparsification of GNNs.
* It demonstrates significant efficiency improvements on large-scale graph data, especially achieving a 100-fold acceleration on graphs with billions of edges. This is of great significance for processing large-scale graph data and can effectively alleviate the bottleneck issues of existing GNNs in terms of computational resources and time.
* By incrementally increasing sparsity layer by layer, UNIFEWS progressively reduces the computational load in multi-layer GNNs, further enhancing the scalability of the model.
Weakness:
* Although the paper proposes a theoretical framework to analyze the impact of sparsification on GNN learning, its theoretical analysis is based on several assumptions, such as the distribution of the graph (e.g., power-law distribution) and the Gaussian distribution of input features. These assumptions may not always hold in practical applications, which could lead to certain discrepancies between the theoretical results and the actual performance. Please further clarify the rationality.
* The performance of UNIFEWS depends on the choice of sparsification thresholds (δa and δw). However, the authors do not provide a systematic method for automatically selecting these hyperparameters, instead relying on manual tuning or empirically based choices. This may make it difficult to find the optimal combination of hyperparameters in practical applications, thereby affecting the performance of the model.
Other Comments Or Suggestions: The paper does not discuss the limitations of the proposed method.
Questions For Authors: Please refer to the Other Strengths And Weaknesses section for detailed inquiries.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are thankful to the reviewer for recognizing our theoretical and experimental contributions. We respectfully address the specific reviews below.
## W1
**Power-law** degree distributions are frequently observed in real-world graphs, especially the large-scale ones this study focuses on, such as social networks, citation networks, and web graphs [R1]. We present an empirical evaluation in *Fig 8(a)*, where the blue bars show the distribution of inverse edge degrees on Cora. It can be observed that edges with larger magnitudes (i.e., smaller degrees) exhibit higher relative density, while only a small portion of edges have magnitudes close to 0 (i.e., nodes with large degrees).
The assumption of a **Gaussian distribution** can apply to node attributes or to the output features of linear transformations, depending on the exact input of iterative or decoupled models. The Gaussian distribution is commonly used to describe the feature distribution of neural networks as an extension of the central limit theorem. For example, text embeddings are commonly used as node attributes on text-attributed graphs and can be regarded as multivariate Gaussian distributions. Similar assumptions are commonly employed in graph generation and GNN research [R2-R5].
We will clarify the rationale when deriving Thm 4.1 in the revised version.
In *Reviewer t1S7 T1*, we further present a new empirical evaluation regarding these two assumptions on synthetic graphs generated by GenCAT.
[R1] Power-law distributions in empirical data. SIAM Rev'09.
[R2] Contextual Stochastic Block Models. NeurIPS'18.
[R3] Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking. ICLR'18.
[R4] Distribution Knowledge Embedding for Graph Pooling. TKDE'22.
[R5] GenCAT: Generating attributed graphs with controlled relationships between classes, attributes, and topology. Information Systems'23.
## W2
We mainly utilize *Thm 4.1* for determining the **edge threshold $\delta_a$**. In realistic applications for GNN sparsification, the sparsity $\eta_a$ is usually given by users. Then, as indicated by Thm 4.1, $\delta_a$ is monotonically related to $\eta_a$ with two constants determined by the dataset and model aggregation. To set the threshold for a model-dataset pair, only a few trial epochs under different $\delta_a$ are needed to fit the actual curve between $\delta_a$ and $\eta_a$ and decide the desired value of $\delta_a$.
Empirical evaluation in *Fig 12* (blue line) shows that the relationship between $\eta_a$ (right y-axis) and $\delta_a$ (x-axis) aligns well with the theory. In fact, for a large range of sparsity (empirically 10%-90%), the relationship is almost linear (note that Fig 12 is drawn with a logarithmic x-axis), which further simplifies the selection.
The **weight threshold $\delta_w$** can be similarly determined by fitting empirical trials when additionally assuming a Gaussian distribution for the weight matrix. This process is identical to conventional neural network pruning in practice, such as [Han et al, 2015].
## C1
One potential limitation and future direction lies in the consideration of graph heterophily, as the current strategy prunes insignificant messages. However, under heterophily, where connected nodes have dissimilar labels, messages with large magnitudes may not be beneficial to model inference. In this case, the graph smoothing objective *Def 3.1* needs to be refined, and consequently, the sparsification strategy should be adjusted. While there are recent works on the spectral optimization process for heterophilic graphs, how to apply sparsification is largely unexplored. | null | null | null | null | null | null |
PCEvolve: Private Contrastive Evolution for Synthetic Dataset Generation via Few-Shot Private Data and Generative APIs | Accept (spotlight poster) | Summary: The paper addresses the problem of generating Differentially Private (DP) synthetic images using APIs, focusing on the setting in which only a small number of private data samples are available.
The authors observe that a popular prior work, Private Evolution, struggles in few-shot private data scenarios due to limitations in its DP-protected similarity voting approach. This is because with few-shot private data, the noise added for DP overwhelms the actual votes, leading to nearly random similarity voting and selection.
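As a toy sketch of this failure mode (all numbers below are hypothetical, chosen purely for illustration): with only 10 private samples voting over 100 synthetic candidates, even unit-variance Gaussian noise often lets a candidate that received no true votes win the noisy argmax.

```python
import numpy as np

n_private, n_candidates, sigma = 10, 100, 1.0
trials, zero_vote_wins = 200, 0
for seed in range(trials):
    rng = np.random.default_rng(seed)
    # Each private sample votes for its nearest synthetic candidate
    # (simulated here as a uniform random choice).
    votes = np.bincount(rng.integers(0, n_candidates, n_private),
                        minlength=n_candidates)
    # Gaussian-mechanism noise added for DP swamps the sparse 0/1 counts.
    noisy = votes + rng.normal(0.0, sigma, n_candidates)
    if votes[noisy.argmax()] == 0:
        zero_vote_wins += 1
print(f"{zero_vote_wins}/{trials} trials select a candidate nobody voted for")
```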
To address this, the authors propose Private Contrastive Evolution (PCE). PCE iteratively mines inter-class contrastive relationships in the few-shot private data and integrates them into an adapted Exponential Mechanism (EM) to directly select candidates rather than vote.
PCE includes several key components:
- A contrastive filter to improve the class-discriminability of synthetic data.
- A similarity calibrator on top of the contrastive filter to maximize the selection probability of the most similar synthetic data.
- A score function with sensitivity 1 based on the similarity calibrator / contrastive filter
Then, using the score function above, a synthetic dataset is generated from the exponential mechanism. This process is then repeated in an evolutionary loop.
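The selection step can be sketched with the standard exponential mechanism; this is a generic illustration with made-up similarity scores, not PCE's actual score function $u = h \circ g$.

```python
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=None):
    """Privately select one index with Pr[i] proportional to
    exp(epsilon * u_i / (2 * sensitivity))."""
    if rng is None:
        rng = np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()           # for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Calibrated similarity scores in [0, 1] for three synthetic candidates;
# with sensitivity 1, the highest-scoring candidate is favored.
scores = [0.9, 0.2, 0.5]
idx = exponential_mechanism(scores, epsilon=2.0,
                            rng=np.random.default_rng(0))
```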
The authors conduct experiments on four specialized datasets from healthcare and industry domains, demonstrating that PCE outperforms PE and other API-assisted baselines. They also analyze PCE's properties, including synthetic image quality, component effectiveness, and hyperparameter influence.
The experimental code is provided in an anonymous repo.
-----
## update after rebuttal
I am positive about the paper and the rebuttals have reinforced my stance.
Claims And Evidence: The claims are supported by empirical evidence.
Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria seem reasonable.
Theoretical Claims: I verified the proof of privacy, which is straightforward.
Experimental Designs Or Analyses: I checked the experimental designs/analyses.
My only concern is whether the data used to pre-train the backbone model has overlap with the sensitive datasets.
Supplementary Material: I did not review the supplementary material since there are no proofs.
Relation To Broader Scientific Literature: The key contribution is a variation of private evolution for generating private synthetic image data in the few-shot learning case. Previous methods ignored inter-class relationships and just generated data for each class in parallel based on similarity scores between the synthetic data and private data. By applying a contrastive filter before the similarity scores, this work is able to better leverage inter-class information.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: 1. It seems that it is possible to generate as many images as desired after the final iteration of PCE, simply by using an i2i model repeatedly on the synthetic dataset. Is my understanding correct?
If so, does generating more synthetic images improve downstream classification? If this is the case, it could be highlighted as a further benefit of PCE that we can generate large synthetic datasets from only a small number of private data points.
2. How is the performance of PCE on large image datasets with a small number of images per class (e.g. CIFAR-100)? It seems like the contrastive filter should improve upon prior works (e.g. private evolution), which essentially ignore the inter-class relationship.
3. Is there a specific reason to consider pure-DP and not approx-DP? The privacy analysis of PCE seems to be a simple composition over iterations/classes, and using strong composition under approx-DP should lead to stronger results.
4. Which dataset(s) is the pre-trained backbone model trained on? Could there be an overlap with the sensitive datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our work! Below, we provide responses to address your concerns and suggestions, with *Lines xxx* and *Section x* referring to specific parts of our paper. We hope these clarifications effectively resolve your concerns, and we also thank you for your constructive feedback!
**Concern 1: Could the pre-trained dataset of a generative model overlap with sensitive datasets?**
We verified that the following pre-trained datasets of our generative models do not overlap with our private few-shot data by cross-referencing unique identifiers and metadata, and using hash comparisons. Specifically:
- Stable Diffusion (SD) is pre-trained on the public multi-modal dataset LAION-2B [1].
- SD+IPA is pre-trained on the same LAION-2B dataset, supplemented with the public multi-modal dataset [COYO-700M](https://github.com/kakaobrain/coyo-dataset).
- OpenJourney is pre-trained on LAION-2B and an additional set of over 100K images from [Midjourney v4](https://www.midjourney.com/home) (a commercial image generative model API).
Reasonably, the images we used to construct the few-shot specialized domains (medical and industrial images) are originally images without corresponding textual data and, therefore, cannot be added to these multi-modal datasets.
[1] Schuhmann, Christoph, et al. "Laion-5b: An open large-scale dataset for training next generation image-text models." NeurIPS 2022.
**Concern 2: Is it correct that we can generate as many images as we want after the final iteration of PCE? If so, does generating more synthetic images enhance downstream classification performance?**
- The answer to the first question is **YES**. We can generate as many images as needed using PCE.
- The answer to the second question is **NO**. We have demonstrated this in *Section CAS w.r.t. N-Shot Synthetic Data* and *Figure 4*, where we show that the utility of the synthetic dataset decreases when generating more than 150 images per class, given noisy data-generation APIs, a fixed privacy budget ϵ*, and API costs in few-shot private data scenarios. Specifically:
1. **Noise**: As the number of synthetic images increases, the valuable information plateaus due to the limited few-shot private data, while noise continues to increase. Since current generative models cannot produce content free of noise, finding a balance is essential to optimize the utility of the synthetic dataset.
2. **Privacy**: PCE's privacy protection relies on differential privacy, constrained by a given privacy budget ϵ*. Each PCE iteration consumes a portion of the privacy budget, so PCE will stop once the total privacy budget exceeds ϵ*, meaning that PCE cannot run indefinitely. Thus, the total number of synthetic images is finite when the number of synthetic images is fixed in each PCE iteration.
3. **Cost**: Each image generation requires a request to the image generation API service, which incurs a corresponding cost. If cost is not an issue, it is possible to generate as many images as needed in each PCE iteration.
**Suggestion 1: Performance on a Private Dataset with More Classes but Fewer Images per Class to Further Show the Advantage of the Contrastive Filter**
We thank you for the suggestion to consider a private dataset with more classes to highlight the advantage of our contrastive filter in our PCE.
- **Private datasets in specialized domains typically have only a few classes**, such as positive vs. negative, which is common in fields like medicine. The few-shot issue is particularly prevalent in these specialized domains. After a thorough survey of the literature, we selected four representative specialized datasets used in our paper, each of which has a limited number of classes.
- **We have included the Cifar100 dataset** in R-Table 1 with other settings fixed, where our PCE outperforms PE by 10.46% in accuracy. This improvement is greater than that observed with existing datasets in *Table 1*, demonstrating the advantage and adaptability of our contrastive filter, even though Cifar100 is not from a specialized domain.
**R-Table 1: Top-1 accuracy (%) on Cifar100 (10 private samples per class)**
||Accuracy (%)|
|-|:-:|
PE|24.23|
Our PCE|**34.69**|
**Suggestion 2: Consider Using Approx-DP Instead of Pure-DP for Stronger Composition and Improved Results**
We thank you for the suggestion of considering approx-DP to further improve our PCE.
While stronger privacy budget compositions for approx-DP may exist, we chose to use the same composition method as the baseline PE for a fair comparison when calculating the total privacy budget. Using pure-DP in PCE ensures that utility improvements are attributable solely to our methodological innovations rather than differences in privacy accounting. Additionally, pure-DP can provide stronger differential privacy guarantees than approx-DP under the same $\epsilon$. Improving PCE with approx-DP could be explored in the future. | Summary: This paper proposes Private Contrastive Evolution (PCE), a new algorithm for generating high-quality differentially private (DP) synthetic images from small amounts of private data using a generative API. PCE addresses the limitations of existing Private Evolution (PE) algorithms, which struggle with high noise levels in small datasets. PCE introduces a contrastive filter to enhance class discrimination and an adaptive exponential mechanism with a similarity calibrator to improve the quality of selection under DP constraints. Experiments on four professional datasets (e.g., medical and industrial) show that PCE outperforms PE and other baselines, achieving up to 5.44% accuracy improvement on downstream tasks while maintaining strong privacy guarantees.
Claims And Evidence: The paper's key claims are generally supported by evidence, but some limitations merit discussion:
1. Experiments focus on specialized domains (medical/industrial). Generalization to broader domains (e.g., natural images) remains unverified.
2. The total $\epsilon_* = 8$–$10$ (Tab. 6) might not be strict enough for stringent DP applications.
Methods And Evaluation Criteria: The proposed methods are well-suited to the problem. PCE addresses PE’s limitations in few-shot scenarios through a Contrastive Filter that leverages inter-class relationships and an Adapted EM with a utility function $u = h \circ g$ normalized to sensitivity ($\Delta_u = 1$), reducing noise dominance.
PCE’s effectiveness is validated through performance gain over PE and other baselines (Table 1). Ablation studies (Table 3) confirm that both components ($g$ and $h$) are essential, supporting the method's relevance to the task.
Theoretical Claims: Yes, the theoretical claims are technically sound. Class-center aggregation reduces bias from boundary samples, while the similarity calibrator ensures that the utility function satisfies $u \in [0, 1]$ for stable EM sampling. Additionally, the sequential composition guarantees total $\epsilon_*$-DP, confirming the soundness of the approach.
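The composition guarantee referenced here is the basic sequential composition theorem for pure ε-DP; a one-line sketch (the iteration count and per-step budget below are hypothetical numbers for illustration):

```python
def total_privacy_budget(per_step_epsilons):
    """Basic sequential composition for pure eps-DP: running mechanisms
    with budgets eps_1..eps_T on the same private data is
    (sum of eps_t)-DP overall."""
    return sum(per_step_epsilons)

# e.g., 20 evolution iterations at eps = 0.5 each compose to eps* = 10
assert total_privacy_budget([0.5] * 20) == 10.0
```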
Experimental Designs Or Analyses: The experimental design is generally sound but has areas for improvement.
1. Generalization could be strengthened by testing on larger-scale or natural image datasets (e.g., ImageNet subsets).
2. Privacy verification lacks empirical attack results (e.g., membership inference) to validate the theoretical analysis.
3. Additionally, the computational cost of 20 iterations may be high for some clients, though the API-based design helps offset local compute limitations.
Supplementary Material: Yes. I especially looked at the impact of different encoders (ResNet-18, CLIP) on PE and PCE performance, and the impact of varying privacy costs on PE and PCE performance.
Relation To Broader Scientific Literature: This paper builds on [1], whose similarity voting approach failed in few-shot settings due to noise dominance from the Gaussian mechanism’s high sensitivity. By adopting the Exponential Mechanism, this work improves noise efficiency in DP distillation.
[1] Lin, Z., Gopi, S., Kulkarni, J., Nori, H., and Yekhanin, S. Differentially private synthetic data via foundation model APIs 1: Images. In International Conference on Learning Representations (ICLR), 2024.
Essential References Not Discussed: As far as I know, no related works that are essential to understanding the (context for) key contributions of the paper, but are not currently cited/discussed in the paper.
Other Strengths And Weaknesses: Strengths
1. First work to address few-shot DP synthesis via APIs.
2. Enables privacy-preserving medical/industrial AI with SOTA performance.
3. Releases code and provides full API/dataset specifications and hyperparameter details.
Weaknesses
1. Tested only on specific datasets, so it’s unclear if the results would apply to more general datasets like ImageNet.
2. Only evaluated with relatively large privacy budgets ($\epsilon \geq 4$), i.e., weak privacy, so it's unclear if the method would work well under stronger privacy ($\epsilon \leq 1$).
Other Comments Or Suggestions: It would be better if authors can demonstrate the effectiveness of the method on natural images (e.g., ImageNet subsets).
Questions For Authors: Does the method have inherent limitations in natural images or low-$\epsilon$ regimes?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our work! Below, we provide responses to address your concerns, with *Lines xxx* and *Section x* referring to specific parts of our paper. We hope these clarifications effectively resolve your concerns.
**Concern 1: Generalization to Broader Domains**
- **As mentioned** in *Section Introduction*, we focus on the few-shot issue, which is a critical challenge especially in specialized domains such as medical fields. Therefore, we prioritize these domains to showcase the value of our PCE. Furthermore, publicly available *specialized* pre-trained large models are scarce, making PCE— which relies solely on *general* pre-trained large model APIs— even more valuable. Our PCE can be applied to any domain to address the few-shot issue, as the underlying concept of using additional inter-class contrastive information is general and adaptable.
- **We include a dataset from the natural domain with a larger number of classes** while keeping other settings fixed and demonstrate that our PCE can also generalize to the natural domain in R-Table 1.
- **Large-scale datasets are out of scope, as our paper focuses on few-shot scenarios.**
**R-Table 1: Top-1 accuracy (%) on Cifar100 (10 private samples per class)**
||Accuracy (%)|
|-|:-:|
PE|24.23|
Our PCE|**34.69**|
**Concern 2: A Larger ϵ\* Range Such as ϵ\* ≤ 1**
- **We follow PE** in using ϵ* near 10 (see Section 5.1.2 in PE's paper) for specialized domains like medicine to ensure a fair comparison.
- **We have provided results** for different ϵ* values ranging from 4 to 20 in *Appendix B*, showing the robustness of our PCE and analyzing their impact on the privacy-utility trade-off. As shown in *Table 6*, ϵ*=8/10 emerges as the optimal choice for balancing privacy and utility: accuracy decreases greatly when ϵ*<8 but increases only slightly when ϵ*>10, since the private information is limited in few-shot scenarios.
- **We include** results with a wider range of ϵ* in R-Table 2, demonstrating that our PCE consistently outperforms PE.
**R-Table 2: Top-1 accuracy (%) with more values of ϵ* on Camelyon17**
| |0.01|0.1|1|100
|-|:-:|:-:|:-:|:-:|
PE|55.41|55.83|58.65|66.72
Our PCE|**60.02**|**61.78**|**65.38**|**72.68**
**Concern 3: Empirical Attack Results**
- First of all, Differential Privacy (DP) techniques, such as the Exponential Mechanism (EM), **have been widely proven** to resist empirical attacks [1]. Since the privacy protection ability of our PCE is supported by DP, implemented via the EM in PCE, our approach is **guaranteed** to resist empirical attacks. Our primary focus is on addressing the few-shot issue when applying DP techniques.
- **We incorporate a membership inference attack (MIA)** for COVIDx, where the attack model achieves 50.86% success with DP and 65.27% without, demonstrating PCE's ability to protect against such attacks. Specifically, we select shadow data from the public [covid-chestxray-dataset](https://github.com/ieee8023/covid-chestxray-dataset), which is similar to COVIDx, and train an attack model until convergence using the ResNet-18 [2] architecture with randomly initialized parameters and default SGD settings.
[1] Nasr, Milad, Reza Shokri, and Amir Houmansadr. "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning." IEEE symposium on security and privacy (SP), 2019.
[2] He, Kaiming, et al. "Deep residual learning for image recognition." CVPR 2016.
**Concern 4: Computational Cost on Clients**
Our PCE can be *applied to real* scenarios across resource-constrained clients (*Line 258*), as the client-side computational cost is negligible, limited to feature extraction (with a small encoder and few-shot private data) and minimal distance calculations. R-Table 3 further demonstrates this. The API service requires no local computational resources.
**R-Table 3: Total time cost (seconds) under 20 iterations.**
| |COVIDx|Came17|KVASIR-f|MVAD-l|
|-|:-:|:-:|:-:|:-:|
|Our PCE|11.17|9.59|18.68|17.82|
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. The authors have addressed the key concerns well. Overall, the rebuttal significantly improves the submission. I’m upgrading my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful reviews and encouraging words. We're truly grateful that you took the time to read our rebuttal carefully and that our responses addressed your concerns. Your support and feedback mean a lot to us.
---
Summary: This paper introduces a new method for generating synthetic images under differential privacy called Private Contrastive Evolution (PCE).
This method is designed for the case of few-shot private data, which is prevalent in healthcare, using a generative model behind an API.
PCE works by initializing a synthetic image dataset using a text-to-image API from the class labels. This synthetic dataset is iteratively updated for a fixed number of rounds by privately selecting an image from the synthetic dataset for each class that mostly closely resembles the private images and updating the synthetic dataset to more closely resemble the selected images.
The authors conduct experiments on various datasets within the problem domain and find that PCE outperforms the existing API-assisted baseline: Private Evolution (PE).
Claims And Evidence: The proposed method PCE outperforms the competitor method PE and several data-independent baseline methods across four image datasets for the few-shot case. At present, these experiments consider performance at a single privacy budget (either $\epsilon = 8$ or $\epsilon = 10$). The paper would be improved by considering comparative performance on additional privacy budgets. It is typical to consider budgets such that $\epsilon \in (0.1, 10)$. Since this is in the image domain, it may be worth considering $\epsilon \in (0.01, 100)$.
Methods And Evaluation Criteria: The proposed methods and datasets are appropriate for the problem.
Theoretical Claims: The privacy proof and the definitions appear correct. However, it appears that the method can be strengthened by applying parallel composition to the exponential mechanism step at each iteration. This is possible because the private images are partitioned by class and only those of a given class are used in the exponential mechanism. In this case, you still need to sequentially compose the iterations of the algorithm but can parallel-compose the class iterations. With respect to the privacy analysis, you can achieve the same DP guarantee by setting $\epsilon = \frac{\epsilon_*}{T}$ rather than $\epsilon = \frac{\epsilon_*}{TC}$.
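The budget accounting described here can be sketched numerically; the values of $T$, $C$, and ε* below are illustrative, and `per_mechanism_budget` is a hypothetical helper, not code from the paper:

```python
def per_mechanism_budget(eps_total, T, C, parallel_over_classes):
    """Privacy budget each exponential-mechanism call may spend so that
    the full algorithm satisfies eps_total-DP. Sequential composition
    sums over all T * C calls; parallel composition over the disjoint
    per-class partitions only sums over the T iterations."""
    return eps_total / T if parallel_over_classes else eps_total / (T * C)

eps_star, T, C = 10.0, 20, 5  # illustrative values
seq = per_mechanism_budget(eps_star, T, C, parallel_over_classes=False)
par = per_mechanism_budget(eps_star, T, C, parallel_over_classes=True)
assert (seq, par) == (0.1, 0.5)  # parallel composition allows C times more budget per call
```

The same total guarantee is met either way; the parallel variant simply spends C times more budget on each call, which typically means less noisy selection.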
Experimental Designs Or Analyses: The experiments appear well-designed but could be improved by considering additional privacy budgets. For the image domain, it may be reasonable to consider budgets in the range of $\epsilon \in (0.01, 100)$.
Supplementary Material: I reviewed appendices A-E.
Relation To Broader Scientific Literature: The proposed method improves on the Private Evolution method from Lin et al. 2024 for the case of few-shot private synthetic image generation using APIs. This paper does not introduce new methods into the broader literature but successfully applies known building blocks to a specific problem (private synthetic image generation) to improve on existing approaches.
Essential References Not Discussed: I am not aware of essential references that are not discussed.
Other Strengths And Weaknesses: Figure 2 was helpful to understand Alg. 1.
Other Comments Or Suggestions: Some typos:
>Private Evaluation (PE)
Line 86, 2nd column: Should read "Private Evolution"
>that near distribution boundary
Line 155. 2nd column
Questions For Authors: It would be interesting to know at roughly which dataset size does PE overtake PCE. Or in cases of class imbalanced data e.g. healthy images vs sick images, should the larger class use the Gaussian mechanism applied to a voting histogram and the smaller class use the exponential mechanism as implemented by PCE?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our work! Below, we provide responses to address your concerns and suggestions for extensions, with *Lines xxx* and *Section x* referring to specific parts of our paper. We hope these clarifications effectively resolve your concerns. We also thank you for your creative suggestions for extensions!
**Concern 1: Additional Privacy Budgets for ϵ***
- **We follow PE** in using ϵ* near 10 (see Section 5.1.2 in PE's paper) for specialized domains like medicine to ensure a fair comparison.
- **We have provided results** for different ϵ* values ranging from 4 to 20 in *Appendix B*, showing the robustness of our PCE and analyzing their impact on the privacy-utility trade-off. As shown in *Table 6*, ϵ*=8/10 emerges as the optimal choice for balancing privacy and utility: accuracy decreases greatly when ϵ*<8 but increases only slightly when ϵ*>10, since the private information available in few-shot scenarios is limited.
- **We include** results with a wider range of ϵ* in R-Table 1, demonstrating that our PCE consistently outperforms PE.
**R-Table 1: Top-1 accuracy (%) with more values of ϵ* on Camelyon17**
| |0.01|0.1|1|100|
|-|:-:|:-:|:-:|:-:|
|PE|55.41|55.83|58.65|66.72|
|Our PCE|**60.02**|**61.78**|**65.38**|**72.68**|
**Concern 2: At what dataset size does PE overtake PCE?**
We analyzed the impact of the few-shot dataset size ($K$) in *Section CAS w.r.t. K-Shot Private Data* and *Figure 3*, finding that PCE maintains superiority over PE when $K \leq 100$. In R-Table 2, we extend our analysis to a larger $K$ and observe that **PE outperforms PCE slightly (0.51%) when $K \ge 1000$**. *However, $K \ge 1000$ does not represent a few-shot setting*, while PCE is best suited for few-shot scenarios.
**R-Table 2: Top-1 accuracy (%) with more private data (large $K$) on KVASIR-f**
| |$K=500$|$K=1000$|
|-|:-:|:-:|
|PE|61.32|**65.53**|
|Our PCE|**62.47**|65.02|
**Suggestion for Extension 1: Parallel Composing for DP**
We really appreciate your suggestion on strengthening our method with parallel composition.
However, since our PCE addresses the few-shot issue by leveraging inter-class contrastive information through the contrastive filter ($g$), it requires access to private data across all classes when evaluating each synthetic sample, making parallel composition challenging. Therefore, we believe that parallel composition deserves a separate paper in the future.
**Suggestion for Extension 2: Combining PE and PCE for Class Imbalance Scenarios**
We thank you again for your suggestions on combining PE and PCE for class imbalanced data.
Following your suggestion, for class-imbalanced scenarios, we combine PE and PCE, terming it PE+PCE. Specifically, in PE+PCE, PE is applied to the majority private classes, while PCE is used for minority (few-shot) classes. We compare PE, PCE, and PE+PCE in a class-imbalanced experiment as follows. For this experiment, we design an imbalanced dataset with 1,000 samples from class 0 (normal) and 10 samples from class 1 (breast cancer with tumor tissue) based on the Camelyon17 dataset, while keeping all other settings unchanged (still generating 100 samples per class). The results can be found in R-Table 3.
In this class-imbalanced experiment, both PE and PCE perform better with more private data, but PCE still outperforms PE. Since one class remains few-shot, PE suffers from noise overwhelming the actual votes, leading to nearly random similarity voting and selection for the synthetic data in this minority class. In contrast, PCE alleviates this few-shot issue, providing more informative scores to evaluate the synthetic data. By leveraging PE's advantage in majority private classes and using PCE to address the issues in minority (few-shot) classes, PE+PCE performs the best.
However, simply replacing PCE with PE for majority private classes limits improvement by losing inter-class contrastive information. Future work needs a more creative combination of PE and PCE to fully leverage their advantages.
**R-Table 3: Top-1 accuracy (%) on class-imbalanced Camelyon17**
| |Accuracy (%)|
|-|:-:|
|PE|72.36|
|PCE|74.58|
|PE+PCE|**75.39**|
**Suggestion for Typos**
Thank you for pointing out the two typos. We will correct them in the revised version.
---
Summary: The authors present an interesting approach to an API-assisted algorithm called Private Contrastive Evolution (PCE) to address the challenge of generating high-quality differentially private (DP) synthetic images from few-shot private data using generative APIs.
The authors introduce a contrastive filter to exploit inter-class contrastive relationships within the few-shot private data, enhancing the class-discriminability of synthetic images.
Another interesting point is that the exponential mechanism was adapted to preserve private inter-class contrastive relationships, addressing the excessive noise issue from the high sensitivity of the Gaussian Mechanism. A similarity calibrator is designed to prioritize high-quality synthetic data that closely resembles private data.
Claims And Evidence: Regarding the claim about effectiveness on the "few-shot" problem in differentially private synthetic image generation: although the numerical results show significant improvement over the baseline (PE), I believe the paper does not present sufficiently detailed evidence regarding the isolated impact of the contrastive filter versus the similarity calibrator. I suggest an ablative analysis to determine which component of the method actually solves or mitigates the few-shot problem.
The broad generality and adaptability to different generative APIs are also not clear, as the tests performed are limited to three related APIs (Stable Diffusion, SD+IPA, OJ), all based on the same paradigm. I think there are no clear tests with structurally distinct or closed commercial APIs. That would be a very interesting and more comprehensive analysis.
One point I like about the robust privacy protection is the use of differential privacy. I think the approach is theoretically sound, but the justification for the exact values of the privacy parameters (ϵ*) is superficial. I suggest the authors provide a more thorough discussion of the specific choice of these values and their impact on the trade-off between privacy and utility.
Methods And Evaluation Criteria: I found the use of differential privacy mechanisms combined with contrastive (interclass) techniques and adapted exponential mechanisms really interesting, especially considering the context of the problem.
The selected datasets related to COVIDx (medical images), Camelyon17 (tumor tissues), KVASIR-f (endoscopic images), and MVtecAD-l (industrial defects) represent practical scenarios.
I agree that the metric used is well-established in the literature and makes sense. However, I miss other qualitative metrics (e.g., FID - Fréchet Inception Distance), visual analyses, or specific metrics for the visual quality of the images.
Theoretical Claims: I was especially interested in theorem 4.1 (Privacy analysis of the PCE method), which states that the PCE algorithm satisfies the differential privacy property with the total parameter ϵ. The presented proof uses the classical definitions of sequential composition and the exponential mechanism (EM), demonstrating that the repeated application of the exponential mechanism in each iteration of the algorithm satisfies the global differential privacy requirement due to the sequential composition of the mechanisms. Seems ok to me, but it could be improved with a more detailed discussion on the practical justification of the values chosen for the parameter ϵ. I am especially concerned about the trade-off between utility and privacy in the real context of the data used.
Experimental Designs Or Analyses: I tried to map some aspects of the experiment as:
- selection of experimental datasets. The chosen are recognized benchmarks ensuring comparability.
- selection of baselines. The comparisons are extensive and use recent methods such as PE (Lin et al., 2024), as well as variants (DPImg, RF, GCap, etc.).
- metrics used for Top-1 accuracy are standard in the literature.
- ablative and sensitivity studies help to understand some model parameters.
I miss qualitative and visual assessments, some kind of evaluation of the computational cost for real scenarios, and a detailed analysis of the privacy-utility trade-off as discussed previously.
Supplementary Material: I used the supplementary material to understand some experiments.
Relation To Broader Scientific Literature: I found the main contribution to be directly aligned with the foundations of differential privacy, which introduced the formal concept of DP and privacy assurance mechanisms. The proposed method specifically uses the exponential mechanism and adapts it in an innovative way to the synthetic image generation scenario, which is very interesting.
The use of interclass contrastive relations, which is very important in the proposed method, is related to an established line of literature on contrastive learning and metric learning to allow learning robust representations with little available data.
Essential References Not Discussed: I am ok with the references.
Other Strengths And Weaknesses: The use of only quantitative metrics (Top-1 accuracy) to assess the results is a weakness, as it may mask situations where synthetic images have high quantitative accuracy but low perceptual quality or insufficient diversity.
I miss a detailed justification for the values of differential privacy ϵ to understand the effectiveness of the privacy protection offered.
I liked the idea of using limited computing resources, but I missed a quantitative or qualitative assessment of the actual cost involved in multiple calls to external APIs. The evaluated APIs share the same paradigm; I was wondering about the performance of other approaches.
Other Comments Or Suggestions: NA
Questions For Authors: I have made some comments and suggestions in the previous analysis. If I made some mistake or missed some point, please let me know.
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our work! Below, we provide responses to address your concerns, with *Lines xxx* and *Section x* referring to specific parts of our paper. We hope these clarifications effectively resolve your concerns.
**Concern 1: An Ablative Analysis**
We have only two components: the contrastive filter ($g$) and the similarity calibrator ($h$). In *Section Ablation Study* (page 8), **we have provided an ablative analysis** of these two components. We *remove each component individually* to evaluate their effects. As shown in *Table 3*, eliminating either $g$ or $h$ results in a significant performance drop. **$g$ and $h$ must function together to address the few-shot issue; neither can be used in isolation.** While $g$ leverages inter-class contrastive information, it introduces a trade-off by sacrificing similarity measurement, which is then compensated by $h$.
**Concern 2: Other APIs for More Comprehensive Results**
- **The reason for choosing Stable Diffusion, SD+IPA, and OJ** is that diffusion models dominate image generative models, and these three are among the most widely used and recognized approaches [1].
- **We add** a structurally distinct *Transformer-based* API based on FLUX.1-dev, along with the closed commercial API GPT-4o, and show our PCE's generality to these new distinct APIs in R-Table 1 and R-Table 2.
**R-Table 1: Top-1 accuracy (%) with FLUX.1 API**
| |COVIDx|KVASIR-f|
|-|:-:|:-:|
|PE|52.38|45.67|
|Our PCE|**58.58**|**55.83**|
**R-Table 2: Top-1 accuracy (%) with GPT-4o API**
| |COVIDx|KVASIR-f|
|-|:-:|:-:|
|PE|62.32|56.32|
|Our PCE|**70.41**|**64.52**|
[1] Croitoru, Florinel-Alin, et al. "Diffusion models in vision: A survey." IEEE Transactions on Pattern Analysis and Machine Intelligence 45.9 (2023).
**Concern 3: Impact of Privacy Parameters ϵ***
- **We follow PE** in using ϵ* near 10 (see Section 5.1.2 in PE's paper) for specialized domains like medicine to ensure a fair comparison.
- **We have provided results** for different ϵ* values ranging from 4 to 20 in *Appendix B*, showing the robustness of our PCE and analyzing their impact on the privacy-utility trade-off. As shown in *Table 6*, ϵ*=8/10 emerges as the optimal choice for balancing privacy and utility: accuracy decreases greatly when ϵ*<8 but increases only slightly when ϵ*>10, since the private information available in few-shot scenarios is limited.
- **We include** results with a wider range of ϵ* in R-Table 3, demonstrating that our PCE consistently outperforms PE.
**R-Table 3: Top-1 accuracy (%) with more values of ϵ* on Camelyon17**
| |0.01|0.1|1|100|
|-|:-:|:-:|:-:|:-:|
|PE|55.41|55.83|58.65|66.72|
|Our PCE|**60.02**|**61.78**|**65.38**|**72.68**|
**Concern 4: Qualitative Metrics**
- **Using CAS (top-1 accuracy) aligns with our goal.** Our objective is to achieve high quantitative accuracy in downstream models, which directly reflects the value of synthetic data for these tasks. **As mentioned in our paper** (*Lines 254-256*), the CAS metric (top-1 accuracy) is widely used for assessing the quality of synthetic datasets in downstream tasks. This is also supported by prior influential work [2], which states that *"traditional GAN metrics such as Inception Score [3] and FID are neither predictive of CAS nor useful when evaluating non-GAN models."*
- **We have provided visual analyses with high perceptual quality** in *Section Synthetic Images* (page 7). From *Figure 6*, our PCE can generate visually high-quality images that align with downstream tasks. The FID (↓) value on MVAD-l is 8.47 for PCE but 58.14 for PE.
- **Metrics for perceptual quality pose risks.** While visually high-quality synthetic data may appear realistic, it can exhibit high similarity (low diversity) to private data, reducing its informativeness and utility for downstream tasks.
- **Our PCE is also effective in diversity metrics.**
- Our PCE is designed to enhance diversity. The core of PCE lies in its contrastive filter ($g$), which explicitly improves class discrimination.
- On MVAD-l, PCE achieves 12.51, whereas PE only reaches 82.17 in the Inception Score (↓).
[2] Ravuri, Suman, and Oriol Vinyals. "Classification accuracy score for conditional generative models." NeurIPS (2019).
[3] Chong, Min Jin, and David Forsyth. "Effectively unbiased fid and inception score and where to find them." CVPR 2020.
**Concern 5: Computational Cost**
Our PCE can be *applied to real* scenarios across resource-constrained clients (*Line 258*), as the client-side computational cost is negligible, limited to feature extraction (with a small encoder and few-shot private data) and minimal distance calculations. R-Table 4 further demonstrates this. The API service requires no local computational resources.
**R-Table 4: Total time cost (seconds) under 20 iterations.**
| |COVIDx|Came17|KVASIR-f|MVAD-l|
|-|:-:|:-:|:-:|:-:|
|Our PCE|11.17|9.59|18.68|17.82|
---
Kandinsky Conformal Prediction: Beyond Class- and Covariate-Conditional Coverage
Paper Decision: Accept (poster)
Summary: The present paper introduces an approach to weighted conformal prediction that employs quantile regression as a solution. The authors study general weights depending both on covariates and labels and provide theoretical guarantees via high probability upper bounds. Several particular examples of weighting functions are considered as corollaries of general results. Online setup is also considered separately. Some numerical illustrations are provided.
## Update after rebuttal
I appreciate the authors' response. I think the paper is generally sound and, while not having a single big contribution, improves on existing results mainly in the generality of the approach and the sharpness of the theory. I was slightly confused by the authors' rebuttal, which in several places overclaims the contributions (something much less present in the paper itself). In particular, there is no new algorithm in the paper (adding Y as input doesn't lead to a new algorithm). I increase my score by 1 point to acknowledge my generally positive impression of the paper. However, I recommend the authors further improve the clarity of the paper. For example, the introduction of the randomized score is not adequately handled in the paper.
Claims And Evidence: There are 3 main claims in the paper:
1. First study of the general formulation of the (group) weighted conformal prediction which considers weights depending both on labels and covariates.
While, to the best of my understanding, previous work indeed didn't consider weights dependent on labels in the context of group-conditional coverage, it is not clear whether this significantly changes the methods and the theoretical analysis. Thus, I currently don't see whether this claim, while being formally correct, represents a substantial novelty with respect to the existing literature.
2. Introduction of the new method for conformal inference.
The introduced method coincides with already existing methods except for using labels as input variables for quantile regression. I don't think that calling the method a new one is warranted.
3. Experimental evaluation of the proposed method
The experimental evaluation is very lightweight with just two datasets considered. For more theoretical paper, I don't necessarily see it as a drawback but also don't think that this part of the paper represents a strong contribution.
Methods And Evaluation Criteria: The methodological part of the paper looks correct.
Theoretical Claims: I didn't check the proofs line by line but the results well correspond to the existing literature and look sound. I guess an improvement of $d^{1/2}$ in one of the bounds is due to explicit consideration of finite dimensional weight function (as opposed to the general VC class) but I might be wrong.
Experimental Designs Or Analyses: I think that the experiments are generally sound though only two datasets are considered. I am somewhat confused with the choice of baselines as I don't see which of the methods uses only covariates for quantile regression (without using the labels).
Supplementary Material: I briefly looked through the proofs of main results.
Relation To Broader Scientific Literature: I think that the paper generally relates well to the existing literature, mentioning the majority of the relevant papers. However, the corresponding discussion is not complete, as the authors do not elaborate on why and how the paper succeeds in generalizing previous works. More precisely, it is not clear why considering $w(X, Y)$ is more challenging than considering just $w(X)$.
Essential References Not Discussed: The authors generally relate the paper well to the existing literature on conformal prediction with distribution shifts. However, they claim that weights depending on both X and Y were not considered in the literature. This is not correct, as the paper [1] considers general weights. Moreover, that paper performs an analysis generally similar to the one in the present paper, except that it does not consider a VC class for the weight function.
[1] Plassier, Vincent, et al. "Efficient conformal prediction under data heterogeneity." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
Other Strengths And Weaknesses: I generally keep up with the literature in this area though I was not aware of the couple of recent (2024) papers on the topic that are referenced by authors.
Other Comments Or Suggestions: I think an additional major issue with the paper is the writing, which, while generally clear, fails to convey by what means the paper achieves the claimed results. This includes the algorithmic and theoretical advances, whose novelty is not properly explained. One particularly odd issue is the absence of any discussion of the score function. In particular, the authors introduce a randomized score function $\tilde{S}$, but I failed to find in the paper any discussion or example of this function. As one more example, Section 4 introduces a test-time algorithm and discusses its drawbacks (computational complexity). However, it does not discuss any benefits of this procedure, which leaves the reader confused.
Questions For Authors: 1. What are the algorithmic innovations of the paper (if any)?
2. What are the difficulties in theoretical analysis (if any) which one needs to overcome considering $w(X, Y)$ compared to simpler case of $w(X)$?
3. Why do you obtain a rate that is better by $d^{1/2}$ than previous works?
4. What are the benefits of considering test-time algorithm?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank you for your review. Below, we address your questions:
**Q1**. Most conformal prediction algorithms share a common structure, involving quantile regression followed by constructing prediction sets via comparison with a quantile threshold. However, significant differences emerge in the precise choice of the quantile function class and the selection of samples used for regression—these subtle yet critical choices underpin our algorithmic contributions.
*Weight functions jointly dependent on covariates and labels*: Although inspired by Gibbs et al. (2023), we generalize their linear quantile regression approach from weight functions defined solely on covariates to those defined jointly on covariates and labels. This generalization is key to achieving broader and practically relevant coverage guarantees (e.g., overlapping and latent groups).
*Efficient quantile regression without monotonicity assumptions*: While our Alg. 3 directly extends Gibbs et al. (2023), ensuring computational efficiency is non-trivial due to the loss of monotonicity structure utilized in their method. This monotonicity structure no longer holds when we consider groups jointly defined by $X$ and $Y$. To address this, our main method, Alg. 1, extends the computationally efficient quantile regression technique from Jung et al. (2023)—originally limited to covariate-conditional guarantees—to handle weighted coverage guarantees. This algorithm avoids the need to solve a quantile regression problem for all candidates y at test time.
*Generalization of Mondrian prediction sets*: Alg. 2 further adapts the classic Mondrian conformal prediction approach to our generalized weighted setting.
**Q2**. The primary challenge arises because both the conformity score and the quantile $\hat{q}$ depend on both $X$ and $Y$. For example, in the analysis of quantile regression we want to bound the number of calibration samples $(X_i, Y_i)$ for which the score equals the quantile. In general, when this number is small, you can get a more accurate quantile estimator. As in prior work, we do this by conditioning on the random variables that fix the quantile’s value. However, in our case, if the score function is deterministic, then fixing the values of $X$ and $Y$ fully determines the score (as opposed to Gibbs et al. (2023) where they only need to condition on $X$). To address this, we show that it suffices to condition on $\Phi(X, Y)$, which preserves uncertainty in the score. For non-continuous $S|\Phi$, we can consider any randomized score function $\tilde{S}$ that satisfies the continuity assumption.
Another challenge arises when given a quantile function $\hat{q}$ and the features of a test point $X$, we want to compute the prediction set. The issue is that both the score function and the quantile depend on the value of $X$ and $y$. Inspired by Mondrian conformal prediction, we obtain the prediction sets by returning all the labels y where the score $S(X,y) \leq \hat{q}(X,y)$.
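A minimal sketch of this test-time construction (keep every candidate label whose score is at most its own label-dependent threshold); the score and quantile functions below are hypothetical stand-ins for fitted models, not the paper's implementation:

```python
def prediction_set(x, labels, score, q_hat):
    """Return every candidate label y whose conformity score S(x, y) is
    at most the label-dependent threshold q_hat(x, y) produced by the
    fitted quantile regression."""
    return {y for y in labels if score(x, y) <= q_hat(x, y)}

# Toy example with hand-made score and threshold tables (assumptions):
scores = {("x1", 0): 0.3, ("x1", 1): 0.8, ("x1", 2): 0.5}
thresholds = {("x1", 0): 0.6, ("x1", 1): 0.6, ("x1", 2): 0.6}
ps = prediction_set("x1", [0, 1, 2],
                    lambda x, y: scores[(x, y)],
                    lambda x, y: thresholds[(x, y)])
assert ps == {0, 2}  # label 1 is excluded: 0.8 > 0.6
```

Because the threshold is evaluated per pair $(x, y)$ rather than once per test point, each candidate label can face a different cutoff.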
**Q3**. Areces et al. (2024) prove uniform convergence of the weighted coverage for the entire function class. We sharpen the dependence on $d$ by focusing on the basis weight functions, since the function class is a vector space.
**Q4**. The alternative test-time algorithm in Sec. 4 achieves a different type of coverage guarantee based on exchangeability arguments. Specifically, this guarantee incorporates randomness over both the calibration dataset and the test point. In contrast, Alg. 1 and 2 provide a training-conditional guarantee, conditioning only on the fixed realization of the calibration data. Although these two types of guarantees are not directly comparable, exchangeability-based guarantees generally yield smaller coverage error bounds, as in our results in Sec. 4.
**“the paper [1] considers … weight function.”**
Thank you for bringing [1] to our attention. Both this paper and ours address potential distribution shifts between the calibration and the test data, where the distribution over the features and the label changes. The main difference between the two papers is that we get coverage guarantees for a class of distribution shifts, while they consider a single distribution shift that they can estimate using samples from the test distribution.
**“particularly odd issues … discussion of the score function.”**
In Sec. 3 we mention that “As is common in conformal prediction, our method ensures the desirable coverage for any score function $S$ [that satisfies the continuity assumption]”. For deterministic scores violating the continuity assumption, we mention that “we allow for the use of a randomized score function to break potential ties in quantile regression while keeping the assumptions about distribution $D$ minimal”. In our experiments we use the deterministic Conformalized Quantile Regression score function (Romano et al., 2019), and the randomized Adaptive Prediction Sets (APS) score function (Romano et al., 2019; Ding et al.,2023).
---
Rebuttal Comment 1.1:
Comment: I appreciate the answer by reviewers. I have several additional questions:
1. "Alg. 1, extends the computationally efficient quantile regression technique from Jung et al. (2023)—originally limited to covariate-conditional guarantees—to handle weighted coverage guarantees". Algorithm 1 is a standard quantile regression formulation. Essentially, it is even not an algorithm as no procedure for solving the problem is provided. What do you mean precisely?
2. "Alg. 2 further adapts the classic Mondrian conformal prediction approach to our generalized weighted setting." The algorithm two is just comparison the score value with the threshold. What kind of adaptation do you mean?
3. Your answer on Q2: can you point to the line(s) in the proof(s) which execute(s) this conditioning?
4. Regarding the randomness in the score function, what I mean is that it appears in the paper out of nowhere. There are no examples, no introduction of $\epsilon_i$, and no discussion of why it is needed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. To answer your questions:
1. You are correct that our Alg.1 takes the same form of linear quantile regression as Jung et al. (2023). However, a key difference lies in the function class over which we perform quantile regression. In comparison, Jung et al. fit quantile regressors over features of the form $\phi(x)$ (i.e., functions of covariates alone), but we perform quantile regression over a richer feature class $\phi(x, y)$, which depends jointly on both covariates and labels. This change requires a new way to compute the prediction set at test time (our Alg. 2). We agree that our previous response could be clearer—a more precise statement would be that our combination of Alg. 1 and Alg. 2 extends the approach of Jung et al.
We will explain how Alg. 2 is different in the answer to your next question.
2. The key distinction with Mondrian conformal prediction is that our method applies a per-example threshold for each candidate label $y$, based on the pair $(X_{n+1}, y)$. This threshold is obtained by evaluating a quantile regression model at $\phi(X_{n+1}, y)$. In contrast, Mondrian conformal prediction applies a per-group threshold: it first assigns $(X_{n+1}, y)$ to a discrete group, then uses a fixed threshold associated with that group. As a result, Mondrian conformal prediction requires the groups to be disjoint.
In addition, while Jung et al. (2023) can handle overlapping groups, their per-example threshold function is only a quantile regression function of the covariates $X_{n+1}$. As a result, they can apply the same threshold for all candidate labels y.
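To make the interplay of the two steps concrete, here is a minimal, hypothetical sketch (assuming a linear feature map and generic Nelder-Mead minimization of the pinball loss; all function names and implementation details are illustrative, not the paper's actual code): a quantile regressor is fit over calibration features $\phi(x_i, y_i)$, and the prediction set then uses a per-candidate-label threshold evaluated at $\phi(x_{\text{test}}, y)$.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(theta, Phi, scores, alpha):
    # Average pinball (quantile) loss at level 1 - alpha.
    diff = scores - Phi @ theta
    return np.mean(np.maximum((1 - alpha) * diff, -alpha * diff))

def fit_quantile_regression(Phi, scores, alpha):
    # Linear quantile regression over the span of the calibration features
    # phi(x_i, y_i), in the spirit of Alg. 1 (solved here by a generic optimizer).
    theta0 = np.zeros(Phi.shape[1])
    return minimize(pinball_loss, theta0, args=(Phi, scores, alpha),
                    method="Nelder-Mead").x

def prediction_set(theta, feature_map, score_fn, x_test, candidate_labels):
    # Per-candidate-label threshold, in the spirit of Alg. 2: include y iff its
    # score lies below the quantile model evaluated at phi(x_test, y).
    return [y for y in candidate_labels
            if score_fn(x_test, y) <= feature_map(x_test, y) @ theta]
```

With an intercept-only feature map, this collapses to standard split conformal prediction with a single empirical-quantile threshold shared by all candidate labels.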
3. In the proof, the conditioning on $\Phi$ occurs in L1004-L1038.
4. Thank you for clarifying your question. In the paper, we did formally introduce the noise $\varepsilon_i$ (left column of L176-181), an example of randomized Adaptive Prediction Sets (APS) score function (right column of L 356-357; detailed description in L 1090), and a discussion of why randomization is needed (left column L 173-176; in addition, a description of how it shows up in our theorem statement in the paragraph of left column L 219). To re-state our discussion here: “... the use of a randomized score function to break potential ties in quantile regression, while keeping the assumptions about distribution D minimal.”
We are happy to expand the discussion to convey the following point: our method ensures the desirable coverage for any score function that satisfies the continuity assumption that $S|\Phi$ is continuous. For deterministic scores violating the continuity assumption, we need to randomize the score. | Summary: The paper proposes Kandinsky Conformal Prediction method for general group-conditional guarantees in contrast to Mondrian conformal prediction that ensures conditional coverage over a disjoint set of groups. Their framework handles overlapping and fractional group memberships, and allows for group memberships to be jointly defined by covariates and labels. They show this method is computationally efficient as it requires solving a single quantile regression over the vector space spanned by group functions; and statistically optimal as it matches the minimax-optimal conditional coverage error rate with finite samples. Through empirical evaluation on ACSIncome and CivilComments datasets, authors demonstrate their method achieves the best conditional coverage as measured by best group average coverage deviation and stable average coverage deviation with the increase in number of groups.
Claims And Evidence: The claims made in the submission regarding their general formulation of conditional coverage are supported by their theory (Theorem 3.1, Corollary 3.4) and accompanying proofs. The experiments empirically validate these claims.
Methods And Evaluation Criteria: Yes, the proposed method provides a general framework for conditional conformal prediction, where the weight function class can be adapted to obtain group-conditional guarantees, Mondrian conformal prediction, conformal prediction with fractional group membership, and distribution shifts. The evaluation metric of group average coverage deviation also makes sense.
Theoretical Claims: I went over the proofs for Theorem 3.1 and Corollary 3.4 in the Appendix.
Experimental Designs Or Analyses: I checked the experimental analysis for both ACSIncome and CivilComments datasets. I do not see any issues with the current design choices, but there is definitely scope to make the empirical evaluation more extensive (in terms of datasets, score functions, as well as choice of the function class).
Supplementary Material: I reviewed the full supplementary material.
Relation To Broader Scientific Literature: This paper expands the existing conditional coverage guarantees to allow overlapping groups and fractional memberships that are defined over both covariates and labels. As the paper mentions, when weight functions are based on covariates, the method provides the same type of guarantees as in past work but with tighter bounds on expected weighted coverage deviation.
Essential References Not Discussed: I feel the paper is fairly complete in its discussion of important references for understanding the context.
Other Strengths And Weaknesses: I appreciate the general framework proposed and the flexibility it offers to obtain the desired coverage guarantees for several applications. I also found the paper to be well-written and the proofs very detailed.
Refer to questions below for specific concerns regarding empirical evaluation.
Other Comments Or Suggestions: I would like to see more details regarding the implementation of the algorithm (e.g., expanding section A.3), especially as the code was not shared in the current submission.
Questions For Authors: Is there any reason why comparison with Gibbs et al., 2023 and Ding et al., 2023 (class conditional conformal prediction) is not included in the experiments? I believe this would help understand the performance improvements offered by the proposed method in the very specific cases introduced in the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank you for your review. We address your question and your comment below.
### 1. I would like to see more details regarding the implementation of the algorithm (e.g., expanding section A.3), especially as the code was not shared in the current submission.
The code is available at: https://doi.org/10.5281/zenodo.15104271. We will include the code in the publication and we are ready to share further implementation details.
### 2. Is there any reason why comparison with Gibbs et al., 2023 and Ding et al., 2023 (class conditional conformal prediction) is not included in the experiments? I believe this would help understand the performance improvements offered by the proposed method in the very specific cases introduced in the paper.
We have compared Kandinsky conformal prediction with class conditional conformal prediction on the CivilComments dataset. However, the class conditional method is not implementable for the regression task in ACSIncome. The method of Gibbs et al., 2023 does not scale for the image tasks we consider because it trains the image classifier from scratch to learn the weight functions. This violates the post-hoc nature of conformal prediction. Kandinsky trains the group classifier using the two-dimensional input of logits and labels, which is scalable.
---
Rebuttal Comment 1.1:
Comment: 1. Thanks for sharing the code. Please expand the final paper to add further implementation details. If you can share the details in your response, that would also be helpful.
2. Is the class-conditional method Mondrian CP or clustered CP from Ding et al.? This detail is missing I suppose, and it would be helpful to add both as the performance depends on the specific regime as discussed in Ding et al. Also, I do not agree that Gibbs et al. method will not scale and that it *trains the image classifier from scratch*. It updates the quantile fit at test time -- while it is expensive to update the fit for each test point, the implementation shared by the authors uses a small number of iterations in practice. The experiments in their paper for image classification as well as experiments I have seen on datasets of larger scale validate that the experiments are feasible. Please consider updating your results in light of this or feel free to clarify anything that is missing. Also, which image tasks are you referring to -- I can only see experiments on ACSIncome and CivilComments.
Overall, I still believe the empirical evaluation is limited in demonstrating the benefits of the method.
---
Reply to Comment 1.1.1:
Comment: We would like to thank you for your response! To answer your remaining questions:
### 1. Is the class-conditional method Mondrian CP or clustered CP from Ding et al.?
We run class conditional conformal prediction on CivilComments, which has two classes. In this setting, Mondrian CP and clustered CP from Ding et al. yield the same algorithm. This is because clustered CP treats each class as a distinct cluster.
### 2. Also, I do not agree that Gibbs et al. method will not scale and that it trains the image classifier from scratch. It updates the quantile fit at test time -- while it is expensive to update the fit for each test point, the implementation shared by the authors uses a small number of iterations in practice.
We are referring to a language classifier (not an image classifier) in our discussion. We apologize for the earlier typo. Regarding Gibbs et al.'s approach: While they update the model fit for each test point with a relatively small number of iterations, they still require training a language classifier from scratch. Here the language classifier refers to the classifier used to predict group memberships, which forms the basis for constructing the weight function class. | Summary: The paper proposes an extension of the conditional coverage works of Jung et al. and Gibbs et al. to functions of covariates and labels, not just covariates as in these previous works. Experiments on ACSIncome and CivilComments datasets are included.
Claims And Evidence: Claims are supported by the evidence in the paper.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I did not carefully check the proofs.
Experimental Designs Or Analyses: Although the analysis looks correct, there is no code available to double check.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Extending the results of GCC'24 to test-label conditional coverage guarantees.
Essential References Not Discussed: See Blot et al. https://arxiv.org/abs/2406.17819 for an extension of the GCC paper to also include weight functions that depend on the label as well as the covariate. The algorithm in Section 4 should be subsumed by this paper, although the two-sided bound is new.
Other Strengths And Weaknesses: The paper provides the same flavor of bounds as GCC'24, but for weight functions that depend on both X and Y. Some initial work in this direction was completed by Blot et al (https://arxiv.org/abs/2406.17819), but this paper takes it yet another step further by proving a larger set of results including two-sided bounds.
It is a reasonable contribution to the field to extend the GCC'24 analysis to these weight functions, although it seems to include no highly novel technical devices. The experiments are reasonable for the purpose of demonstrating validity of these results, although not extraordinary.
Other Comments Or Suggestions: All the arguments in GCC'24 already hold for randomized score functions. There is no novelty in using a randomized score in the quantile regression.
Questions For Authors: Unclear what randomness wCD is integrating over. if C is an argument, then it can be randomized (as in the case of a random score). Then in what sense does the inequality in Theorem 3.1 hold? Almost surely?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We address your concerns below.
### 1. “The paper provides the same flavor of bounds as GCC'24, but for weight functions that depend on both X and Y. Some initial work in this direction was completed by Blot et al (https://arxiv.org/abs/2406.17819), but this paper takes it yet another step further by proving a larger set of results including two-sided bounds.”
Thank you for pointing us to the paper by Blot et al. In our paper, we provide results that are stronger than Blot et al. in two ways. First, Algorithm 3 and the bound in Theorem 4.1 in our paper have a similar flavor to the results in Section 2.2 of Blot et al. However, their result is a one-sided bound, whereas we derive a two-sided bound assuming that the data are i.i.d. and that the distribution of $S(X,Y)|\Phi(X,Y)$ is continuous. Second, in Section 3 we obtain high probability bounds (that are in general stronger than expectation bounds) using Algorithms 1 and 2, under the same assumptions. We will incorporate this citation and discussion into our paper.
### 2. All the arguments in GCC'24 already hold for randomized score functions. There is no novelty in using a randomized score in the quantile regression.
The use of randomized score functions is not the key contribution of our work. The main reasons why randomized score functions are used in Gibbs et al. (2023) and our paper though are somewhat different. In Gibbs et al. (2023) the authors use randomized score functions to obtain exact coverage. The difficulty when extending quantile regression to work for weight functions of the form $w(X,Y)$ is that conditioning on $X$ and $Y$, the value of a deterministic score function $S$ is fully defined (as opposed to the setting in Gibbs et al. (2023)). Therefore, we add randomization to the score function as one of the ways to overcome the obstacle of $S(X,Y)|X,Y$ not being continuous for a deterministic $S$.
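One illustrative way to realize this randomization (a hedged sketch with made-up helper names and noise scale, not the paper's APS construction) is to perturb a deterministic score with a small continuous noise term, so that the score's conditional distribution given $(X, Y)$ becomes continuous and ties are broken:

```python
import numpy as np

def randomized_score(deterministic_score, eps_scale=1e-6, rng=None):
    # Add small continuous noise to a deterministic score so that
    # S(X, Y) | (X, Y) is continuous, breaking potential ties.
    if rng is None:
        rng = np.random.default_rng(0)
    def score(x, y):
        return deterministic_score(x, y) + rng.uniform(0.0, eps_scale)
    return score
```

Because the noise scale is tiny, the ordering of well-separated scores is essentially unaffected; the randomization only matters where exact ties would otherwise occur.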
### 3. Unclear what randomness wCD is integrating over. if C is an argument, then it can be randomized (as in the case of a random score). Then in what sense does the inequality in Theorem 3.1 hold? Almost surely?
In Section 2 we define wCD as an expectation over the test point $(X,Y)$ and the internal randomness of $\mathcal{C}$ (which exists when, as you mention, we use a randomized score of the test point). Since wCD is by definition integrating over the randomness of $\mathcal{C}$, in Theorem 3.1 the dependence on the randomness of the score of the test point is accounted for in wCD. To explain things further, the guarantee in Theorem 3.1 holds with high probability over the randomness of the calibration dataset and the noise $\{\varepsilon_i\}_{i\in [n]}$ that is used to randomize the scores of the calibration data points. | Summary: This paper is considering the problem of conformal prediction with conditional guarantees, in particular allowing the conditioning event to be both a function of covariate X and label Y. They build upon the previously proposed technique in the literature which is based on training a linear quantile regression over the linear span of some predefined basis. This basis would then capture finitely many conditioning events, which in this paper can be a function of X and Y. They prove both PAC-type and "averaged over calibration"-type coverage guarantees and have a number of experiments backing up their claims.
Claims And Evidence: I believe all the claims in the paper are accurate. They accompany their propose method with a number of special cases that makes everything crystal clear.
Methods And Evaluation Criteria: The evaluation methods make sense to me.
Theoretical Claims: All the theoretical claims make sense. I have looked through their proofs, the proofs are very similar to the work of [Gibbs et al 2024], which is based on analyzing the sub gradients of the quantile regression. The resemblance is natural as they are generalizing their method. That being said, it might be the case that I have missed some small errors, as I did not check all the equations one by one.
Experimental Designs Or Analyses: They make sense to me.
Supplementary Material: I have looked at the proofs.
Relation To Broader Scientific Literature: They broaden the use case of conformal prediction in the scenarios where conditional coverage is important. The extra dependence on Y makes this framework more flexible from the application point of view as it can model more advanced subpopulations of the data or more advanced distribution shifts (beyond covariate shift).
Essential References Not Discussed: Not that I know of.
Other Strengths And Weaknesses: look at questions ...
Other Comments Or Suggestions: look at questions ...
Questions For Authors: Even though the extra dependence on Y in designing the conditioning event makes the framework more flexible, it highlights the question of how we should design the basis (\phi) in practice. This problem also applies to the works of [Gibbs et al] and [Jung et al] on covariate conditional events. Of course, in some cases we can design the groups relying on domain knowledge, fairness considerations, and so on. But more generally, the question is how do we systematically figure out what kind of conditioning event we have to include or, more generally, what kind of distribution shifts we have to consider. Now for the case of covariate shifts, there is this option of taking advantage of unlabeled data from the test domain, which might be accessible in some scenarios. But unlabeled data would not be useful to figure out a good basis function beyond covariate shifts. So how can we design a meaningful basis as a function of X and Y?
In thinking about these it might also be good to take a look at some other research directions like [1] which are thinking about this question in the setting of covariate shift, but might give some ideas for the more general case in this paper.
[1] conformal prediction with learned features
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank you for your review. We answer your question below.
### 1. “... how should we design the basis (\phi) in practice? … how do we systematically figure out what kind of conditioning event we have to include or more generally what kind of distribution shifts we have to consider … How can we design a meaningful basis as a function of X and Y?”
This is an interesting research direction. Towards this direction, we analyze theoretically and evaluate empirically the application of fractional group membership where we obtain the desired coverage for groups defined by unobserved attributes. For this application in Section 3.1 we define $\Phi(X,Y)$ as the probability that the test point $(X,Y)$ belongs to a group $G$ based on the unobserved attributes conditioned on the value of $(X,Y)$. Our theoretical analysis is for the case where we know the true probabilities that define $\Phi(Χ,Υ)$. In practice, we can use the calibration data that includes the value of the protected attribute to estimate the probabilities for the basis $\Phi$. We implement this approach in our experiments in Section 5. | Summary: The authors build on the rich literature on extensions of the conformal prediction framework, that traditionally yields intervals where marginal probabilistic calibration is true.
More specifically, they propose a method able to infuse very generic notions of group-conditional calibration, moving from previous work covering only more specific cases.
Such method, called Kandinsky Conformal Prediction, is thus proposed, described in detail and then tested on real world tasks.
Claims And Evidence: From a theoretical perspective, Kandinsky CP is fully characterised by an extensive theoretical analysis, with well-proven results.
I am a bit skeptical about the applicative case study, but more on it below.
Methods And Evaluation Criteria: The applicative benchmarks are well chosen and very representative (maybe a bit limited).
Theoretical Claims: I have checked all the proofs in the manuscript, and they seem correct.
Experimental Designs Or Analyses: While I find the theoretical endeavour of sure and clear merit, I remain a bit skeptical about the applicative case studies.
Their results show that Kandinsky CP gives better results than the methodologies already available in the literature.
I believe the paper is lacking in this respect along two directions:
- It is unclear why the proposed method performs better, and thus when such an increase in complication is worthwhile from an applicative perspective. This could be clarified via a more extensive study on real-world datasets, in order to characterise the types of analytical scenarios where Kandinsky beats other methods.
- Along this same direction, it would be interesting to characterise the performance of the method also via the use of synthetic datasets... e.g. it would be interesting to know what degree of overlap between classes would make Kandinsky better than the alternative.
Supplementary Material: I thoroughly read the supplementary material, which seems in good order
Relation To Broader Scientific Literature: As stated previously, this result is generated in the context of the rich body of literature about non-marginal conformal prediction. The authors make a good effort at summarising the fundamental references.
Essential References Not Discussed: All the fundamental literature pieces are there, I would only cite the second edition of Algorithmic Learning in a Random World (same authors, but published in 2023)
Other Strengths And Weaknesses: none of relevance
Other Comments Or Suggestions: none of relevance
Questions For Authors: Your test examples seem to show a supremacy of Kandinsky CP with respect to other methodologies. Do you have any idea why? Is it because of specific features of the dataset, or just because the method is better than the alternatives allround?
Are there applicative situations where Kandinsky CP "breaks down"?
This can be explored also with synthetic datasets.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank you for your valuable comments. We address your questions and concerns below.
### 1. “It is unclear why the proposed method performs better…Kandinsky beats other methods.” “Your test examples seem to show… alternatives allround?”
We provide two concrete example scenarios where KCP's generality directly improves coverage:
- **Overlapping Groups:** We consider overlapping subgroups jointly defined by covariates $X$ and labels $Y$. For example, in our experiments with the ACSIncome dataset (income prediction task), groups are naturally overlapping and jointly defined by subsets of demographic attributes and income levels. In such scenarios, standard Mondrian conformal prediction does not apply, as it requires disjoint groups. Furthermore, the conservative baseline method introduced by Barber et al. (2021)—which assigns the most conservative prediction sets to test points based on all overlapping groups to which they belong—tends to produce excessively large and thus impractical prediction sets. Our experiments also validate this result.
- **Latent or Implicit Groups:** KCP effectively handles scenarios involving latent groups or implicit subpopulations defined by attributes unavailable at test time. Previous methods that rely exclusively on explicit group annotations are inadequate for such cases. For example, class-conditional conformal prediction uses only labels $Y$ as explicit group identifiers, making it insufficient for capturing latent groups. In contrast, the Kandinsky framework jointly leverages covariates $X$ and labels $Y$ to infer implicit, probabilistic group memberships. Our experiments with the CivilComments dataset precisely demonstrate this scenario: group annotations are unavailable at test time, and our results show that KCP successfully maintains the desired coverage guarantees, whereas baseline approaches like class-conditional CP fail to do so.
Additionally, we want to emphasize that KCP unifies existing methods (class-conditional, covariate-group-conditional, Mondrian) without increasing complication. It reduces general group-conditional coverage to a single quantile regression over an appropriately chosen function class. Our main contribution rigorously shows that this approach achieves stronger conditional coverage guarantees through a practical and simple algorithm.
### 2. “I would be interested to know what degree of overlap between classes would make Kandinsky better than the alternative. “
In the CivilComments test dataset, the number of groups that a sample belongs to follows the distribution below:
| Number of Groups | Samples |
|-----------|---------------|
| 0 | 81235 |
| 1 | 37226 |
| 2 | 12041 |
| 3 | 2704 |
| 4 | 470 |
| 5 | 88 |
| 6 | 13 |
| 7 | 4 |
| 8 | 1 |
11% of samples belong to at least two overlapping groups. Under such a degree of overlapping, Kandinsky achieves half the coverage deviation of the best alternative (the conservative conformal prediction method).
### 3. Are there applicative situations where Kandinsky CP "breaks down"?
We conducted experiments to systematically analyze scenarios where Kandinsky Conformal Prediction (KCP) provides an advantage. Specifically, we explored the effects of varying two critical parameters: the number of groups and the sample size per group. In the ACSIncome dataset, we fixed the number of samples per group while progressively increasing the total number of groups. In the CivilComments dataset, we varied the total sample size while keeping the number of groups constant.
Our results show that KCP scales robustly with an increasing number of groups, whereas baseline methods deteriorate significantly (Fig 1a). Additionally, KCP’s performance consistently improves with larger sample sizes. However, we observed a scenario where KCP underperforms compared to class-conditional conformal prediction when the sample size per group falls below approximately 250 in the CivilComments dataset (Fig 1d). This occurs because class-conditional prediction assumes binary groups defined solely by labels Y, giving it a larger "effective" sample size per group—up to eight times more than KCP, which handles 16 overlapping groups jointly defined by X and Y. Consequently, class-conditional prediction may initially appear competitive at small sample sizes, but its inherent limitations become clear as sample sizes grow. In summary, KCP demonstrates substantial practical advantages, particularly in settings with many groups and moderate to large sample sizes per group.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the insightful answers, which I believe clarify the results of the paper.
I remain not fully satisfied, though, when it comes to the exploration of the benefits of the Kandinsky method with respect to a more classical one as a function of class overlap.
A theoretical study, or a more extensive empirical one, would have been very interesting from a methodological perspective.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I have checked the proofs in Section A.1, which are correct to me.
Experimental Designs Or Analyses: Yes. The experiments are solid.
Supplementary Material: No.
Relation To Broader Scientific Literature: The proposed method expands the scope of condition-conformal methods in Jung et al. (2023) and Gibbs et al. (2023).
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Weaknesses**
1. The applications in Section 3 are not fully explored in experiments.
2. Lack of comparison results with Romano et al. (2020a).
Other Comments Or Suggestions: 1. Add explicit definition of $C(X_{n+1})$ in Algorithm 3.
2. Provide examples or experiments for Fractional group memberships.
3. Discuss the technical reason for improving the convergence rate over Roth (2022); Jung et al. (2023), and the sharper dependence on $|\mathcal{G}|$ compared with Areces et al. (2024).
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank you for your feedback. We address your comments below.
### 1. “Add explicit definition of C(X_{n+1}) in Algorithm 3.”
The definition of $C(X_{n+1})$ is given in the equation of the second line of Algorithm 3, where it includes all $y$ such that the score function on the test point is smaller than a threshold $\hat q_y$. The threshold $\hat q_y$ is defined in the next equation, which is computed by quantile regression on both calibration samples and test samples.
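As a minimal illustration of this per-candidate computation, the sketch below specializes to the degenerate intercept-only feature class, where the quantile regression over the calibration scores plus the imputed test score reduces to an empirical quantile of the augmented score set. The function name is hypothetical; the full algorithm instead fits a quantile regression over the span of the basis functions for each candidate label.

```python
import numpy as np

def intercept_only_prediction_set(calib_scores, score_fn, x_test,
                                  candidate_labels, alpha):
    # Per-candidate-label threshold q_hat_y, specialized to an intercept-only
    # feature class: augment the calibration scores with the imputed test score
    # for candidate y, take an empirical quantile, and include y if its score
    # falls below that threshold.
    pred_set = []
    for y in candidate_labels:
        s_test = score_fn(x_test, y)
        augmented = np.append(calib_scores, s_test)  # calibration + imputed test point
        q_hat_y = np.quantile(augmented, 1 - alpha)  # threshold for this candidate y
        if s_test <= q_hat_y:
            pred_set.append(y)
    return pred_set
```

Note that the threshold is recomputed for every candidate label, since the imputed test score itself enters the augmented sample.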
### 2. “Provide examples or experiments for Fractional group memberships.”
The ACSIncome experiment in the paper is designed with fractional group memberships. The dataset is derived from US Census data collected from different states, which are considered as groups. The state attribute is omitted from the covariates and test point labels, leading to fractional group membership: as defined in L220, we consider fractional group membership with groups defined by unobserved attributes, such that the membership can only be inferred by a probabilistic function over $X$ and $Y$. Given the ground-truth group, the membership is always deterministic. But the membership is fractional for the predictor which observes $X$ but does not observe the group.
We propose another example in the introduction (L74). Due to regulatory restrictions, sensitive attributes are sometimes prevented from being used for prediction. Grouping by unobserved sensitive attributes causes fractional membership.
### 3. “Discuss the technical reason for improving the convergence rate over Roth (2022); Jung et al. (2023), and the sharper dependence on $|G|$ compared with Areces et al. (2024).”
**Compared to Roth (2022); Jung et al. (2023):**
We establish a connection between the subgradient of the empirical quantile loss and the empirical coverage guarantee, then prove concentration for the empirical coverage.
Roth (2022) and Jung et al. (2023) connect the subgradient of the expected quantile loss to the expected coverage, so they must prove the concentration for the quantile loss itself. The empirical coverage yields tighter concentration because it is the sum of binary-valued functions. Roth (2022) and Jung et al. (2023) only prove concentration for a regularized quantile loss, and the regularization worsens sample complexity.
**Compared to Areces et al. (2024):**
Areces et al. (2024) prove uniform convergence of the weighted coverage for the whole function class, but we prove uniform convergence only for the basis weight functions since the function class is a vector space. This sharpens the dependence on |G|.
### 4. “Lack of comparison results with Romano et al. (2020a).”
Romano et al. (2020a) consider a different setting from ours. They assume that the group label for the test point is known, while we address the more general case where the group label is latent. Therefore, their method cannot be directly applied in our experiments. | null | null |
Boost-and-Skip: A Simple Guidance-Free Diffusion for Minority Generation | Accept (poster) | Summary: Authors of this work propose a method called "Boost-and-Skip" for generating minority samples from low-density regions in a data manifold using diffusion models. This method relies on two key modifications to the standard denoising process: 1) Initializing the reverse process with a higher variance noise (instead of a standard Gaussian noise), and 2) Skipping the early denoising steps. Unlike existing diffusion-based methods for minority generation, "Boost-and-Skip" does not rely on expensive guidance procedures increasing the efficiency. In addition, authors demonstrate that the proposed method achieves similar performance on the task of minority generation compared with state-of-the-art methods (diffusion-based and others) while maintaining the overall image quality, requiring no additional computations or modules, and being more efficient.
Claims And Evidence: The main claims are that the proposed method is effective at generating minority samples with diffusion models while being more efficient than guidance-based methods. Both the theoretical and empirical results support these claims.
Methods And Evaluation Criteria: Authors followed previous works on the choice of metrics to report. I must admit that I am not an expert on this topic but I believe the reported metrics and evaluations do make sense for the problem of minority generation.
Theoretical Claims: I skimmed over all the equations included in the main paper and did not find any particular errors. However, I note that I did not do a careful read and I am not fully familiar with the background on some of the methodology details.
Experimental Designs Or Analyses: I find the experimental design to be coherent with previous works on the task of minority generation. So, I do not see any particular pitfalls here.
Supplementary Material: I looked at the additional results included in Section D.
Relation To Broader Scientific Literature: The minority generation problem is important on its own as it focuses on making generative models more inclusive and fair. Since the proposed method is a simple modification on top of standard diffusion models, it is a practical solution that has potential to be easily applied to various frameworks. I also really like the fact that the method builds on simple adjustments to existing processes. This can show that significant improvements are possible by minor smart changes and can encourage the research community to look for simple and efficient solutions for critical and challenging problems such as fairness.
Essential References Not Discussed: No. I found the related work section inclusive.
Other Strengths And Weaknesses: Strengths:
1. Simple methodology that builds on two small modifications to the diffusion process for improving minority generation.
2. Efficient method that does not require additional components or training.
3. Achieving SOTA performance while reducing the inference time (as compared to SOTA methods).
4. Good write-up quality and ease of reading.
Weaknesses:
1. One of the main limitations of this work stems from its reliance on the backbone diffusion model's existing biases. If the model has severe biases, then Boost-and-Skip will have limited effectiveness.
2. In addition, as discussed in Table 2, hyperparameters significantly influence the quality of generated samples. This suggests that the effectiveness of the proposed method depends heavily on a detailed exploration of the hyperparameter values, limiting the method's reliability.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Can you discuss your point of view on the weaknesses I listed above and if and how these concerns can be addressed?
2. As can be seen in the additional results included in Section D of the appendix, it seems like the quality of generated samples is degraded when adjusting the base diffusion model for improved minority generation. For example, in Figure 8, it is easier to tell that samples in b and c are generated, while samples in a are more realistic. Or in Figure 11, some samples are very weird-looking (humans in column c). I wonder if you know how badly the sample quality can be affected by improving the minority generation?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We greatly appreciate Reviewer o8Xg for the strong acceptance and thoughtful feedback. Below, we provide detailed point-by-point responses to address your remaining concerns.
---
> **1. [o8Xg] questioned the effectiveness of our approach on highly biased datasets.**
To address your concern, we consider CIFAR10-LT, a highly-imbalanced version of CIFAR-10, and investigate the performance benefit of our method. See the table below for the results.
| Method | cFID | sFID | Prec | Rec |
|---------|-------|-------|-----------|--------|
| **DDPM** | 75.71 | 44.26 | 0.95 | 0.23 |
| **CBGAN**| 78.62 | 43.76 | **0.99** | 0.08 |
| **BnS** | **70.12** | **43.73** | 0.91 | **0.34** |
"CBGAN" refers to a class-balancing GAN approach that implements minority generation with minority conditional labels [1]. Similar to the experiments in our paper, we used real minority data from CIFAR10-LT as the reference for calculating the reported metrics. Observe that B&S outperforms the considered baselines (including the GAN-based approach in [1]) under this highly-biased benchmark, further demonstrating the robustness of our framework.
---
> **2. [o8Xg] expressed concerns regarding the sensitivity to hyperparameters.**
Although we acknowledge that our framework may be sensitive to hyperparameters — particularly $\gamma$ — we provide a practical heuristic for their selection in Appendix C (Lines 1250–1251), which streamlines the process of choosing both $\gamma$ and $\Delta_t$. Leveraging this approach, a simple one-dimensional grid search over $\gamma$ is sufficient to identify effective hyperparameters (see Lines 1267–1270 for details).
We also highlight that the implementation complexity of our method is significantly lower than that of existing guided minority samplers, which often involve numerous design choices. For example, the approach in [2] requires training two separate classifiers with many design options (e.g., classifier architectures), while the method in [3] necessitates the selection of six hyperparameters. In contrast, our framework only requires choosing two hyperparameters, providing substantial practical benefits over the guided minority samplers in [2,3].
---
> **3. [o8Xg] expressed concerns regarding the visual quality of minority samples.**
We believe there are largely two reasons behind the quality degradation mentioned by the reviewer. First, there is a general trade-off between minority sampling performance and image quality. Second, the base diffusion model itself lacks the ability to generate minority features. We provide further explanation for each hypothesis below.
To investigate the first hypothesis, since FID and Precision are measured with respect to ground-truth minority samples, we can use FID as a proxy for how close the generated distribution is to the distribution of minority data, and Precision as a proxy for the quality of generated minority samples. In Fig. 2 (provided in the link below), we observe there is generally a trade-off between the two quantities, and B&S provides competitive trade-off performance while achieving a dramatic reduction in inference cost compared to guidance-based minority methods. The reviewer is also directed to Table 2 (c), where we again observe FID vs. Precision trade-off as we adjust boosting strength in B&S.
- Link to Fig. 2: https://docs.google.com/presentation/d/1dsMx8s5kJikQnjv6IQNvk-UyC_DgfU9xJpF-ZkaLweI/edit?usp=sharing
Also, we hypothesize that in some cases, the base diffusion model may lack the capability to synthesize minority features. For instance, on ImageNet, it is well-known that generating human faces is challenging; see Fig. 7 in [4], Fig. 15 in [5], and Fig. 5 (b) in [2]. We believe that is why human faces in Fig. 11 (c) synthesized by B&S appear unnatural and distorted. Using a better base diffusion model may yield more realistic minority samples.
---
**References**
[1] Class Balancing GAN with a Classifier in the Loop, UAI 2021
[2] Generating High Fidelity Data from Low-density Regions using Diffusion Models, CVPR 2022
[3] Self-Guided Generation of Minority Samples Using Diffusion Models, ECCV 2024
[4] Large Scale GAN Training for High Fidelity Natural Image Synthesis, ICLR 2019
[5] Diffusion Models Beat GANs on Image Synthesis, NeurIPS 2021
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. I find the rebuttal effort by the authors helpful to answer my concerns and questions regarding the hyperparameter sensitivity and sample quality. I am happy to keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued strong support and for acknowledging that our rebuttal addressed your concerns. We greatly appreciate your thoughtful evaluation and the time you dedicated to reviewing our work. | Summary: The paper proposes Boost-and-Skip, a method to generate low-density, minority samples. The method has two straightforward yet effective modifications to standard diffusion models: (1) variance-boosted initialization, and (ii) timestep skipping during the generative process. The authors provide intuitions, theoretical analysis, and synthetic experiments to motivate these two modifications. Empirically, they show that their method achieves competitive performance compared to state-of-the-art guidance-based methods but at significantly lower computational costs.
Claims And Evidence: The claims made by the authors (i.e., variance-boosted initialization and timestep skipping) are clearly stated and supported by experiments.
Methods And Evaluation Criteria: The proposed variance-boosted initialization and timestep-skipping are straightforward and intuitive. The evaluation criteria (e.g., cFID) and datasets (e.g., CelebA, ImageNet) are standard and appropriate for assessing minority generation performance.
Theoretical Claims: The paper includes theoretical claims concerning the properties of their method. I checked the claims and found that they help me better understand the intuition behind the method; however, I didn't check the correctness of the claims.
Experimental Designs Or Analyses: The experimental design is rigorous. The ablation studies demonstrate the necessity and impact of each component of the proposed approach (variance-boosting and timestep skipping).
Supplementary Material: I reviewed supplementary material.
Relation To Broader Scientific Literature: The paper is well related to previous works on diffusion models and minority generation. It clearly addresses the efficiency limitation of existing guidance-based approaches and provides an effective alternative.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback and for considering our work for acceptance. We appreciate your time and evaluation. If you have any further suggestions or questions, feel free to let us know. We are more than happy to address any additional points or provide further clarifications as needed. | Summary: The paper proposes an approach called Boost-and-Skip for generating minority samples using diffusion models. Specifically, it begins stochastic generation with variance-boosted noise to encourage initializations in low-density regions. It then skips several of the earliest timesteps to further amplify the impact of low-density initialization. The effectiveness of Boost-and-Skip is supported by both theoretical and empirical evidence, with the added advantage of low computational cost.
Claims And Evidence: The claims are clear and well supported by theoretical and empirical evidence.
Methods And Evaluation Criteria: The proposed Boost-and-Skip is quite simple, yet its advantage lies in its rigorous theoretical support.
Some issues:
1. I feel that the proposed method is sensitive to hyperparameters, requiring a wide range of grid searches. Moreover, the optimal hyperparameter settings vary significantly across different datasets. Table 2(c) presents the substantial differences in results under different hyperparameter settings.
2. The authors consider baselines beyond diffusion models. I think some GAN works specifically focused on minority generation should be considered, rather than general-purpose GAN works.
Theoretical Claims: I did not check the proof very carefully. But their theoretical claims look reasonable.
Experimental Designs Or Analyses: The empirical study is extensive, covering multiple benchmark datasets and baselines.
Supplementary Material: I have skimmed through all parts of the supplementary material.
Relation To Broader Scientific Literature: The paper proposes a simple approach for generating minority samples using diffusion models, achieving performance comparable to state-of-the-art baselines with lower complexity.
Essential References Not Discussed: The literature on conditional diffusion models should be reviewed in the main text of the paper, as this is another important methodological branch for minority generation.
Other Strengths And Weaknesses: The writing in this paper is clear and easy to follow.
Other Comments Or Suggestions: N.A.
Questions For Authors: 1. From Fig. 7, it appears the proposed method would generate OOD samples (where the original density is near 0). This could lead to an inaccurate learned distribution. How do you explain this limitation?
2. I think the learned distribution for the toy data (Fig. 2) can be plotted to evaluate how well it corresponds to the theoretical result in Proposition 3.2.
3. I wonder about the effectiveness of the proposed method on extremely imbalanced problems. The setup in Figure 5 does not clearly demonstrate this. I suggest, for example, using the CelebA dataset with 99% young and 1% old faces. Additionally, a comparison with conditional diffusion models, given known conditional labels, should be included.
4. The sensitivity to hyperparameters (Methods And Evaluation Criteria).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer for your detailed comments and valuable suggestions. Below, we provide thorough point-by-point responses to address your concerns.
---
> **1. [7UiG] expressed a concern on the sensitivity to hyperparameters.**
Please refer to our response to Reviewer o8Xg (the second bullet point).
---
> **2. [7UiG] suggested comparisons with GAN-based minority generation frameworks.**
To reflect your comment, we have conducted new experiments to compare our method with GAN-based minority generation frameworks. Specifically, we evaluate a class-balancing GAN approach [1] and compare its performance in generating minority samples with ours on CIFAR10-LT (a long-tailed version of CIFAR-10). See the table below for the results.
| Method | cFID | sFID | Prec | Rec |
|---------|-------|-------|-----------|--------|
| **DDPM** | 75.71 | 44.26 | 0.95 | 0.23 |
| **CBGAN**| 78.62 | 43.76 | **0.99** | 0.08 |
| **BnS** | **70.12** | **43.73** | 0.91 | **0.34** |
As in the experiments presented in our paper, we used real minority data (from CIFAR10-LT) as the reference for computing the metrics. Observe that B&S outperforms the GAN-based approach in [1] (i.e., CBGAN) even under this highly-biased benchmark, further highlighting its effectiveness as a minority generator.
---
> **3. [7UiG] pointed out that literature reviews on minority-conditional diffusion models are missing.**
We kindly remind the reviewer that approaches with minority-conditional diffusion models are discussed in the related work section in Appendix A.1. We will move them to the main body in our revision.
---
> **4. Clarifications on Fig. 7.**
We kindly remind the reviewer that the focus of minority generation is not to replicate the training data distribution but to intentionally bias generation toward minority instances, which are defined as low-density on-manifold samples. In this regard, Fig. 7 demonstrates that our framework effectively achieves this goal, as the high neighborhood metric values imply the generation of low-density instances.
While the reviewer might be concerned that high neighborhood metric values indicate the presence of off-manifold (i.e., OOD) samples of poor quality, we emphasize that our method does not generate more OOD samples than state-of-the-art minority samplers such as Minority Guidance [2]. This is evidenced by our superior FID scores compared to Minority Guidance (e.g., in Table 1), where FID is computed using real minority (i.e., on-manifold, low-density) data.
---
> **5. [7UiG] suggested a sanity check of Proposition 3.2 using the toy data in Fig. 2.**
Per your suggestion, we did an experiment for the sanity check in Fig. 1 (provided in the link below). Specifically, we plot generated data variance as a function of initial Gaussian noise variance for the two rings example. The blue dotted line denotes the generated data variance predicted by theory, and the orange solid line illustrates the actual generated data variance. The general trend of the two curves is similar, validating the ability of B&S to generate minority samples. We conjecture that the offset between theory and practice may arise from the score estimation error, as we use learned scores rather than exact scores to simulate B&S.
- Link to Fig. 1: https://docs.google.com/presentation/d/1GCZKcxbX_A7e_v_ckVCefcVxGsbm2UkZSxyUc5zuSbA/edit?usp=sharing
---
> **6. [7UiG] questioned the effectiveness of our approach on highly imbalanced benchmarks.**
To address your question, we consider CIFAR10-LT, a highly-imbalanced version of CIFAR-10, and explore the performance benefit of ours. We found that B&S yields improved minority generation even under this biased setting; see the table above (included in the second bullet point of this response) for detailed results.
---
> **7. [7UiG] suggested comparisons with minority-conditional diffusion frameworks.**
We gently remind the reviewer that our experiments already encompass such approaches, by incorporating ADM-ML [2] - a classifier-guided diffusion sampler conditioned on known minority labels in CelebA. See details in Table 1.
---
**References**
[1] Class Balancing GAN with a Classifier in the Loop, UAI 2021
[2] Don’t Play Favorites: Minority Guidance for Diffusion Models, ICLR 2024
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses. My concerns have been addressed.
Just a small point—the comparison of conditional diffusion I mentioned refers to the class-free version. Anyway, I vote for the acceptance of this work after the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score. We are pleased that our rebuttal successfully addressed your previous concerns. Per your suggestion, we will include more comparisons with conditional diffusion frameworks, including some classifier-free versions. | Summary: The paper provides two techniques for improving minority sampling. The first technique is a boost, which initialises the sampling with controllable variance. The second technique is skip where it will skip several sampling timesteps. The authors claim to achieve better performance in generating minority samples.
Claims And Evidence: The evidence that the proposed method achieves minority samples is not very clear, both qualitatively and quantitatively.
Methods And Evaluation Criteria: 1. The method is straightforward and intuitive
2. The improvement is mainly compared with Temperature sampling. Most of the time, the performance is poorer than the baselines (Table 1). Please add more explanation.
3. It is hard to judge Figure 5. I cannot tell why Boost-and-Skip provides better minority samples.
4. Lack of clear metrics to measure minority samples. In the paper, the authors mention AvgkNN, LOF and Rarity Score, yet none of the tables report these numbers.
Theoretical Claims: I checked the theoretical claims but not sure if the proofs are correct.
Experimental Designs Or Analyses: The experimental designs are okay.
Supplementary Material: I checked the supplementary for Proof and Implementation details
Relation To Broader Scientific Literature: n/a
Essential References Not Discussed: The work does not compare with guidance methods to show why this method is better than guidance.
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: n/a
Questions For Authors: 1. The improvement is mainly compared with Temperature sampling. Most of the time, the performance is poorer than the baselines (Table 1). Please add more explanation.
2. The work does not compare with guidance methods to show why this method is better than guidance. Please include more comparisons.
3. Add more metrics about the minority samples. The current metrics in the paper do not show how well minority samples are covered.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer bs5F for the constructive feedback. Below we provide point-by-point responses on your questions and concerns.
---
> **1. [bs5F] expressed a concern that the performance is often limited compared to baselines.**
We note that the superior baselines in Table 1 (e.g., [1,2]) correspond to guidance-based minority approaches that employ guidance terms to direct inference toward low-density regions. While we acknowledge that our suboptimal performance may stem from the lack of such explicit guidance for minority generation, our method significantly advances the Pareto frontier in the performance-complexity tradeoff (see Fig. 1). In particular, our framework delivers notable computational benefits over the guided minority approaches. For example, on ImageNet-64, our method achieves **65% reduction in wall-clock time and 4.5× lower peak memory usage** compared to the current state-of-the-art [2] (see Table 3).
---
> **2. Clarifications on Fig. 5.**
We would like to assure the reviewer that the visual attributes of our samples in Fig. 5 capture distinctive features of minority instances. For instance, jack-o’-lantern images with bright surroundings (rather than the typical dark Halloween atmosphere) are considered as low-density features of the class [3]. Also, our eagle-class images in the same figure exhibit more intricate visual details compared to the baselines, which are also known as minority features [1,2].
---
> **3. [bs5F] noted missing neighborhood metrics like AvgkNN.**
As noted in L356-358 (right column), the evaluation results using AvgkNN, LOF, and Rarity Score are provided in Appendix D.1, where B&S performs consistently well across all three metrics, rivaling the state-of-the-art guided minority sampler [2]. See Fig. 7 therein for explicit details.
---
> **4. [bs5F] noted missing comparisons with guidance-based minority methods.**
We would like to gently remind the reviewer that, in Tables 1,3,4, we already compare B&S with guidance-based methods such as ADM-ML [1], Minority Guidance [1], and Self-guidance for minority generation [2].
---
**References**
[1] Don’t Play Favorites: Minority Guidance for Diffusion Models, ICLR 2024
[2] Self-Guided Generation of Minority Samples Using Diffusion Models, ECCV 2024
[3] Generating High-Fidelity Data from Low-Density Regions Using Diffusion Models, CVPR 2022 | null | null | null | null | null | null |
Beyond Confidence: Exploiting Homogeneous Pattern for Semi-Supervised Semantic Segmentation | Accept (poster) | Summary: This article proposes a new metric, AgScore, for pseudo label filtering. It measures the accuracy of pseudo-labels by evaluating the similarity between a pixel's embedding and positive pixel embeddings, as well as the dissimilarity between it and negative pixel embeddings. This method can be integrated as a universal plugin into existing SSL frameworks and improves performance on various baseline models.
Claims And Evidence: The empirical results and theory of this article reflect that AgScore is a better indicator for filtering pseudo labels.
But this paper fails to explain why AgScore is superior to vanilla confidence.
Methods And Evaluation Criteria: Yes I did.
Theoretical Claims: Yes I did.
Experimental Designs Or Analyses: Yes I did.
Supplementary Material: Yes I did.
Relation To Broader Scientific Literature: The contribution of this paper lies in the more accurate indicator for filtering pseudo labels. However, there are two main issues.
1. Why is the AgScore better than vanilla confidence? The theoretical proof only indicates the rationality of AgScore, but cannot prove how it is better than existing metrics.
2. Due to the emergence of Universal Visual Large Models (the SAM series), by 2025 the SSL task will no longer be challenging or have practical value.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Advantages:
1. Promising performance.
2. Clearly written.
Disadvantages:
1. Due to Universal Visual Large Models, this task is not as valuable as before.
2. The reason why AgScore can beat vanilla confidence is not clear.
Other Comments Or Suggestions: None.
Questions For Authors: Though I can find some technological differences, the design of AgScore has similarities to some existing works (e.g., RankMatch [1]) that leverage inter-pixel relationships. Could you elaborate on the essential distinctions between them?
For other questions, please see the Strengths and Weaknesses part.
[1] Zhang Z, Chen W, Fang C, et al. Rankmatch: Fostering confidence and consistency in learning with noisy labels[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2023: 1644-1654.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for taking the time to share your comments in the review assessment. We provide a detailed point-by-point response to your comments.
Note that the following **link** refers to https://anonymous.4open.science/r/AgScore/Rebuttal.pdf
---
**Q1**: Advantages of AgScore.
**A1**: Semi-supervised semantic segmentation is essentially a learning strategy-centric task, and its core lies in effectively filtering pseudo-labels to explore unlabeled data.
- Previous **confidence-based methods tend to fall short due to the trade-off** between TPR and FPR when handling pseudo-labels. A high confidence threshold ensures pseudo-label quality (low FPR) but discards many correct pseudo-labels with lower confidence (unfavorably low TPR), and vice versa.
- We focus on the homogeneous pattern in the embedding space. Intuitively, **the high-dimensional, sophisticated embedding space has greater potential for assessing pseudo-label reliability than the relatively simple one-dimensional confidence metric from the prediction space**. This provides hints for assessing pseudo-label reliability in the embedding space.
- In implementation, we absorb the merits of confidence to construct clean positive and negative agents. For pixel predictions difficult for confidence to handle, we employ the agent score function to score pseudo-labels in collaboration with clean positive/negative agents, **leveraging the embedding space's capability without sacrificing confidence's advantages, yielding higher TPR and lower FPR**.
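The scoring idea described above can be sketched in a few lines. This is a hypothetical form for illustration only, not the paper's exact score function: the pixel embedding is compared against clean positive and negative agents via cosine similarity, and the difference is used as the reliability score.

```python
import numpy as np

def agscore(z, pos_agents, neg_agents):
    """Illustrative agent score for one pixel embedding z: mean cosine
    similarity to clean positive agents minus mean cosine similarity to
    negative agents. A higher score suggests a more reliable pseudo
    label. (Hypothetical sketch; the actual AgScore may differ.)"""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.mean([cos(z, p) for p in pos_agents])
    neg = np.mean([cos(z, n) for n in neg_agents])
    return pos - neg

z = np.array([1.0, 0.2])                               # candidate pixel embedding
pos_agents = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
neg_agents = [np.array([-1.0, 0.0])]
print(agscore(z, pos_agents, neg_agents) > 0)          # close to positives → high score
```

Unlike a one-dimensional confidence threshold, such a score draws on the geometry of the embedding space and on both positive and negative evidence, which is the intuition behind the claimed TPR/FPR improvement.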
Experimentally:
- Fig. 1 (d) shows that AgScore achieves a better TPR-FPR balance over confidence.
- Fig. 3 depicts the better TPR and FPR dynamics of AgScore than confidence-based methods during training.
These experimental results, along with the superior segmentation performance (Tab. 1-4), provide strong evidence that AgScore reaches new heights in reliable pseudo label filtering beyond confidence.
Furthermore, we attempt to analyze from the perspective of information theory. We prove that the mutual information between the pseudo labels selected by AgScore and the ground truth is greater than that of regular confidence-based methods (**Theorem G.3 in link**). Therefore, AgScore theoretically improves pseudo label selection compared to the confidence metric.
In summary, from the perspectives of **intuitive motivation**, **experimental analysis**, and **theoretical justification**, we demonstrate the advantages of AgScore over confidence-based methods.
---
**Q2**: About SAM.
**A2**: We respectfully disagree with your viewpoint.
- Semi-supervised learning focuses on better exploring vast amounts of unlabeled data under extremely limited labeled data.
- In fact, even for powerful visual foundation models like SAM, they **struggle to generalize to domain-specific scenarios**, such as medical images, requiring fine-tuning to adapt to downstream tasks. In these tasks, obtaining sufficient annotations is challenging and time-consuming, making it highly valuable to explore how SAM can leverage easily accessible unlabeled data.
To further validate the value of SSL, we evaluate AgScore with SAM under the extreme 1-labeled case. As shown in **link** Tab. 8, we find: (1) The original SAM model struggles under the extreme 1-shot setting, underscoring the challenge of domain transfer for large vision models. (2) Fine-tuning SAM with a confidence-based pseudo-labeling strategy significantly boosts performance, demonstrating the value of SSL for adapting foundation models to specialized tasks. (3) Integrating AgScore into the SAM fine-tuning pipeline further improves results, validating the effectiveness of AgScore even with powerful vision backbones.
---
**Q3**: About RankMatch.
**A3**: Thanks for pointing this out. RankMatch proposes a sample selection strategy via a confidence voting scheme in the embedding space to increase sample selection quantity for learning with noisy labels.
However, we argue that an essential distinction is that **RankMatch focuses solely on positive samples**, neglecting the utilization of negative samples. Unlike RankMatch, **AgScore combines knowledge from both positive and negative samples** by examining the difference in embedding similarity between the candidate pseudo-label's corresponding pixel and the positive/negative agents to measure reliability, thus better leveraging the capabilities of the embedding space. Moreover, we provide a theoretical justification for AgScore's working mechanism.
Furthermore, we construct an ablation study in **link** Tab. 9 to verify the impact of considering only positive agents (1-st entry), demonstrating that the introduction of negative agents improves performance.
---
We hope our response can resolve your concern. Please do not hesitate to let us know if you have further questions :)
Thanks for your time and consideration. Have a wonderful day!
[1] How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Images.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' thoughtful and detailed rebuttal. The reviewer sincerely appreciates the time and effort the authors put into addressing the concerns. After carefully reviewing the responses, I believe they have addressed some of my doubts. However, I still have a main concern:
About the SAM part.
I agree with the author that VLMs like SAM struggle to generalize to domain-specific scenarios. Hence, I admit that semi-supervised learning, weakly-supervised learning, unsupervised learning, and domain-adaption are valuable in annotation-sparse scenes like medical images, remote sensing images, etc. But in the current scene understanding of natural images, these methods are no longer the most advanced mainstream solutions. Thus, the SSL segmentation in natural images is not a valuable task for me.
Overall, I think this paper gives some methodological insight, thus I will improve my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer JJYk,
Many thanks for your score improvement and your positive feedback! Your comments complete our paper and make it better!
Regarding your question, we are happy to engage in further discussion.
We agree that vision foundation models like SAM have advanced the segmentation field. However, prompt-based SAM may not be well-suited to directly handle real-world scene understanding applications in natural domain (e.g., autonomous driving) without fine-tuning, due to the following reasons:
- Distribution shift and input perturbations: Vision foundation models, including SAM, are known to *exhibit vulnerability* to real-world distribution shifts, such as compression-induced corruptions [1]. These may arise from varying JPEG compression rates depending on the imaging devices used, which is common in natural image datasets. Such sensitivity makes it difficult to generalize to the specific natural image application without fine-tuning.
- Semantic limitations of SA-1B pretraining: SAM is pre-trained on class-agnostic datasets (SA-1B) and produces segmentations without semantic understanding. However, many natural image applications *require semantically aware* segmentation of multiple categories simultaneously, which cannot be directly achieved by prompt-based SAM without further adaptation.
Recently, some works [2, 3] have attempted to fine-tune SAM to adapt to real-world applications by composing large-scale, richly annotated datasets that separately target the two aspects discussed above, to improve its performance in the natural domain. In such scenarios, obtaining high-quality, pixel-level annotations remains costly and time-consuming. This aligns well with the central goal of semi-supervised learning (SSL): to effectively utilize abundant unlabeled data (scaling the dataset to a larger scale) to enhance model adaptation and improve segmentation quality.
We evaluate AgScore on standard benchmarks primarily to ensure fair comparison with a wide range of recent well-established SSL methods. We sincerely appreciate your comment, and we will actively pursue this direction by establishing SSL benchmarks tailored for foundation model settings, which we believe will further expand the impact and applicability of the SSL research community.
Once again, thank you for your support of our work! If you have further questions, we would be happy to continue the conversation — after all, exchanging ideas is always *valuable*, regardless of the domain :)
[1] Robustness Analysis on Foundational Segmentation Models.
[2] Segment Anything in High Quality.
[3] Semantic-SAM: Segment and Recognize Anything at Any Granularity. | Summary: This paper focuses on the confidence-based scoring functions in the semi-supervised semantic segmentation task. An agent construction strategy, aka., AgScore, is proposed to build clean sets of correct and incorrect pseudo labels. Experiments on three datasets show performance improvements in semi-supervised semantic segmentation.
Claims And Evidence: The motivations from the "homogeneous pattern" is clearly shown in Fig.1.
Why is the concept of Agent introduced here? It is not clear what the relationship is between "Agent" and the pseudo labels. The reviewer believes the name is somewhat misleading.
Methods And Evaluation Criteria: The manuscript includes a substantial amount of theoretical analysis that appears to be tangential to the core focus of the study. Specifically, the exact operational mechanism of the 'agent score' metric remains unclear. The reviewer finds it challenging to identify how this metric is computed or applied in practice.
Theoretical Claims: 1. In the theoretical analysis, please discuss the independence assumption of X and Y and analyze its possible impact.
2. Suggestions: the theoretical analysis should supplement the discussion of the problem that increasing the number of negative sample agents may lead to semantic overlap, and analyze its impact on separability.
Experimental Designs Or Analyses: The experimental results mainly focus on performance while neglecting training and evaluation efficiency; there should be a table or a subsection discussing this.
Supplementary Material: The additional results and the algorithms help make the idea clear.
Relation To Broader Scientific Literature: A new perspective on the pseudo label selection in semi-supervised semantic segmentation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - **Formula Derivation**:
- By analyzing the distribution of \( Z = \frac{Y}{X} \), it is proven that increasing the number of negative agent samples \( M \) can improve the separability between correct and incorrect pseudo-labels.
- The relationship between FPR (False Positive Rate) and \( M \) is derived, proving that \(\frac{\partial FPR}{\partial M} < 0\).
- **Issues**:
- The derivation assumes \( p_1 > p_2 \) and \( q_1 < q_2 \), but no experimental or theoretical support is provided to justify these assumptions.
- The derivation does not consider that increasing \( M \) may lead to semantic overlap among negative agent samples, thereby affecting separability. This point is mentioned in the experimental section but is not discussed in detail in the theoretical analysis.
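The separability claim in the derivation above can be illustrated with a small Monte Carlo sketch. Everything below is illustrative and not taken from the paper: the Gaussian similarity model, all constants, and a mean-difference score used in place of the ratio \( Z = Y/X \) (to avoid division by zero).

```python
import numpy as np

# Illustrative Monte Carlo check of "more negative agents -> lower FPR".
# Similarities are modeled as Gaussians with means p1 > p2 and q1 < q2;
# a mean-difference score replaces Z = Y/X to avoid division by zero.
rng = np.random.default_rng(0)
N, sigma, lam, n_pix = 16, 0.3, 0.2, 20_000   # all constants assumed
p1, p2, q1, q2 = 0.8, 0.4, 0.2, 0.6

def fpr(M):
    # incorrectly pseudo-labeled pixels: similarity to positive agents is
    # centered at p2, to negative agents at q2; flag as "correct" if the
    # score (mean positive sim - mean negative sim) exceeds threshold lam
    pos = rng.normal(p2, sigma, (n_pix, N)).mean(axis=1)
    neg = rng.normal(q2, sigma, (n_pix, M)).mean(axis=1)
    return float(np.mean(pos - neg > lam))

fprs = [fpr(M) for M in (1, 8, 64)]
# averaging over more independent negative agents shrinks the score's
# variance for incorrect pixels, so the false positive rate drops
assert fprs[0] > fprs[1] >= fprs[2]
```

Under this toy model the effect mirrors \(\frac{\partial FPR}{\partial M} < 0\), but it silently relies on the negative samples being independent, which is exactly the semantic-overlap caveat raised above.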
Other Comments Or Suggestions: There can be some visualization results to prove the effectiveness of the proposed method, such as TSNE, etc.
Questions For Authors: 1. **Training and Evaluation Efficiency**:
- The experimental results primarily focus on performance metrics. Could the authors provide a table or subsection discussing the training and evaluation efficiency of the proposed method? This would help readers understand the computational cost and scalability of the approach.
2. **Assumptions in Theoretical Analysis**:
- The derivation assumes \( p_1 > p_2 \) and \( q_1 < q_2 \). Could the authors provide experimental or theoretical evidence to support these assumptions? This would strengthen the validity of the theoretical analysis.
3. **Semantic Overlap in Negative Agents**:
- The derivation does not consider the potential semantic overlap among negative agent samples when \( M \) is increased. Could the authors elaborate on how this might affect the separability and overall performance of the method?
4. **Visualization of Results**:
- Could the authors include visualization results, such as t-SNE plots, to demonstrate the effectiveness of the proposed method? Visual evidence could provide additional insights into how the method distinguishes between correct and incorrect pseudo-labels.
5. **Impact of Negative Agent Selection**:
- How does the selection strategy for negative agents (e.g., orthogonal selection) impact the overall performance? Could the authors provide a more detailed analysis or comparison with other selection strategies?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for taking the time to share your comments in the review assessment, as well as for acknowledging the **new perspective**, **well-supported motivation** and **clear idea**. We provide a detailed point-by-point response to your comments.
Note that the following **link** refers to
https://anonymous.4open.science/r/AgScore/Rebuttal.pdf
---
**Q1**: Concept of Agent.
**A1**: Sorry for not providing you with an intuitive understanding of the term "agent". In our work, "agents" refer to the sets of pixels that are considered to have correct or incorrect pseudo-labels, serving as a **bridge** for evaluating the correctness of the pseudo-labels for the remaining pixels. This is why they are named "agents".
---
**Q2**: Independence Assumption of X and Y.
**A2**: We assume that X and Y are independent for the convenience of subsequent theoretical derivations. In practice, we first select the top-1% and bottom-1% confidence pixels and then randomly/orthogonally sample positive/negative agents from them. This is done to **ensure that the independence assumption** holds to the greatest extent. To make the theoretical derivation more general, we even consider the case where the sampled agents are not identically distributed, i.e., the Poisson binomial distribution (Sec. C in the Appendix). Overall, our assumption is mild and supported by implementation.
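A minimal sketch of the confidence-percentile agent construction described above. Function and parameter names are hypothetical, the pool ratio is an assumption, and the class-balanced per-class sampling of the actual method is omitted for brevity.

```python
import numpy as np

# Hedged sketch: take the top-1% / bottom-1% confidence pixels as candidate
# pools, then randomly sample positive / negative agents from each pool.
def build_agents(confidence, n_pos, n_neg, rng, ratio=0.01):
    order = np.argsort(confidence)
    k = max(1, int(ratio * len(confidence)))  # top/bottom 1% candidate pools
    top_pool, bottom_pool = order[-k:], order[:k]
    pos = rng.choice(top_pool, size=min(n_pos, k), replace=False)     # positive agents
    neg = rng.choice(bottom_pool, size=min(n_neg, k), replace=False)  # negative agents
    return pos, neg

rng = np.random.default_rng(0)
conf = rng.random(10_000)                    # stand-in per-pixel confidences
pos, neg = build_agents(conf, n_pos=16, n_neg=32, rng=rng)
assert conf[pos].min() > conf[neg].max()     # pools are disjoint in confidence
```

Sampling from disjoint extreme-confidence pools, rather than taking the extremes directly, is what the rebuttal points to as keeping the draws of positive and negative agents approximately independent.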
---
**Q3**: Value of our Theoretical Analysis.
**A3**: Our theoretical analysis is closely intertwined with our method and experiments. As mentioned above, the agent selection strategy employed in our method is designed to ensure the independence assumption in the theoretical analysis. Moreover, the conclusion drawn from our theoretical analysis, "increasing M enhances the separability between correct and incorrect pseudo-labeled pixels," is also validated by Tab. 5 in our experiments. At the same time, the experiments also demonstrate that when M becomes excessively large, the performance degrades, since the independence assumption breaks down.
---
**Q4**: Training and Evaluation Efficiency.
**A4**: Thanks for your valuable suggestion. Notably, our AgScore is only involved in the selection of pseudo-labels during training and **does not introduce any additional cost during evaluation**. To further quantify the efficiency impact of AgScore, we report the GPU memory and training time of AgScore and baseline UniMatch under the same setting (Pascal VOC classic, 92 partition, cropsize=513, batchsize=8, ResNet50).
| | mIoU | GPU Memory (G) | Training Time (h) |
| :------: | :--: | :------------: | :---------------: |
| UniMatch | 67.4 | 44.2 | 23.6 |
| AgScore | 69.4 | 46.9 | 25.3 |
We observe that AgScore brings a significant performance improvement at the cost of a slightly increased memory consumption and an acceptable increase in training time.
---
**Q5**: Assumptions in Theoretical Analysis.
**A5**: As shown in Fig. 1(c), we compute the average similarity between pixels corresponding to correct and incorrect pseudo-labels. It is evident that the similarity between pixels with correct pseudo-labels is greater than that between pixels with correct and incorrect pseudo-labels, i.e., $p_1 > p_2$ holds statistically. Similarly, $q_1 < q_2$ also holds statistically.
---
**Q6**: Semantic Overlap in Negative Agents.
**A6**: As you mentioned, when M becomes sufficiently large, there will be semantic overlap among negative agents. In this case, the most harmful negative agents are those that have semantic overlap with positive agents, as this violates the assumption that $p_1 > p_2$. Consequently, $\mu_1 - \mu_2$ increases, leading to a larger $\text{FPR}_\lambda$ (as shown in Lemma 4.3), which in turn degrades the separability between pixels with correct and incorrect pseudo-labels.
---
**Q7**: More Visualization of Results.
**A7**: As shown in **link** Fig. 4 and 5, we supplement additional t-SNE visualizations for more classes on the Pascal and Cityscapes datasets. It can be observed that all classes clearly exhibit the Homogeneous Pattern. Additionally, as shown in **link** Fig. 7, the precision of the positive and negative agents obtained using simple top/bottom confidence is satisfactory. Therefore, leveraging relatively clean positive and negative agents, we are able to distinguish between correct and incorrect pseudo-labels.
---
**Q8**: Impact of Negative Agent Selection.
**A8**: In Tab. 6 of the Appendix, we explore various strategies for negative agent selection, including "Uniform", which uniformly samples negative agents from the candidate set, and "Bottom", which samples negative agents in ascending order of confidence. We observe that our "Orthogonal" strategy achieves the best results with a light computational cost.
---
We hope our response can resolve your concern. Please do not hesitate to let us know if you have further questions :)
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the thoughtful and detailed rebuttal. The reviewer sincerely appreciates the time and effort the authors put into addressing the concerns. After carefully reviewing your responses, the reviewer is pleased to see that all of the doubts have been resolved.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer bo7S,
We sincerely appreciate your endorsement of our work and your positive feedback! We will make every effort to further improve our work!
Authors | Summary: This study focuses on semi-supervised semantic segmentation which struggles to effectively use unlabeled data due to challenges in balancing true and false positives when filtering pseudo labels. This paper introduces an agent construction strategy and the Agent Score function (AgScore) to better identify correct and incorrect pseudo labels by leveraging patterns in the embedding space. AgScore is theoretically analyzed and shown to enhance segmentation performance across multiple frameworks and datasets, with code and models provided for future research.
Claims And Evidence: No. The following claims made in the submission are not supported by clear and convincing evidence,
- Claim1: "pixels belonging to similar patterns tend to share homogeneous semantics compared to different patterns." Reason: The author uses the example in Figure 1(b) to demonstrate the homogeneous pattern, but this example is not representative and is likely the result of careful selection. More categories and a more comprehensive demonstration are needed to substantiate this claim. Moreover, although the features are distinctive, it is unconvincing that they are all classified as bicycles. The author needs to visualize the features of all relevant categories in the dataset together to enable reviewers to better assess whether Figure 1(b) is reasonable.
- Claim2: "However, this scoring function tends not to be preferred ascribed to the inherent trade-off between the true positive rate (TPR) and false positive rate (FPR), as illustrated in Figure 1 (a)." Reason: It is unclear how Figure 1(a) was generated.
- Claim3: "The results indicate that for any given pixel, there exists a higher probability of being correctly predicted if it exhibits a higher similarity to the set of correct pseudo labels compared to the set of incorrect pseudo labels." Reason: The same as Claim1, the evidence from Figure 1 is not convincing.
Methods And Evaluation Criteria: This paper introduces an agent construction strategy and the Agent Score function (AgScore) to better identify correct and incorrect pseudo labels by leveraging patterns in the embedding space. However, the experiments lack sufficiently comprehensive ablation studies to evaluate the proposed method. For example,
- Why "we randomly select N pixels from the top-1% confidence pixels in F in a class-balanced manner"? Any experiments to show the effectiveness of this method? How would the method handle a long-tailed distribution dataset? Additionally, is it possible to leverage the known ground truth (GT) data to construct this? I believe this part lacks corresponding experiments.
- Also, the design of section 3.3 lacks corresponding experiments.
- Is the designed orthogonal selection strategy reasonable, and would different training methods affect this approach (e.g., using different loss functions or different pixel category determination algorithms)?
- Whether the pixel selection strategy in the "Agent Construction" section is truly reasonable needs to be validated through more quantitative and qualitative experiments. For example, how do the distributions of predictions and ground truth (GT) in the top-1% and bottom-1% compare, and is there a difference compared to the top-2%? Is it necessary to rely on different datasets to rigorously control this parameter, and so on?
Theoretical Claims: I have reviewed it, and while there are no major issues, it seems that these validations do not address the core problems present in the paper.
Experimental Designs Or Analyses: Refer to "Methods And Evaluation Criteria".
Supplementary Material: Yes, I have reviewed all the supplementary materials but did not find the ablation studies I was expecting. Additionally, there are some typos, such as obvious errors in the data presented in Table 4.
Relation To Broader Scientific Literature: It is uncertain, as it is unclear whether the algorithm has limitations, such as being applicable only to specific datasets or particular domains.
Essential References Not Discussed: Perhaps, adding some more recent works in the related work section and including a dedicated section to discuss approaches for better pseudo-label planning would be beneficial.
Other Strengths And Weaknesses: Algorithm 1 and Algorithm 2 can be fully merged.
Other Comments Or Suggestions: There are basically no typos in the paper that affect reading comprehension.
Questions For Authors: Refer to "Methods And Evaluation Criteria" and "Claims And Evidence".
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for taking the time to share your comments in the review assessment. We provide a detailed point-by-point response to your comments.
**link** refers to https://anonymous.4open.science/r/AgScore/Rebuttal.pdf
---
**Q1:** More Evidence.
**A1**:
- Claim 1: To demonstrate the generalization of the homogeneous pattern phenomenon, we conduct experiments and visualizations on both Pascal VOC and Cityscapes (1/16 partition) using ResNet-50.
(1) Single-category visualizations (Fig. 5 & 6 in link) highlight the top-4 most frequently predicted classes in each dataset. These results show that pixels with similar patterns within a class tend to share homogeneous semantics, while those from different classes exhibit distinct patterns. This supports our claim that “pixels with similar patterns tend to share homogeneous semantics compared to different patterns.”
(2) Multi-category visualizations offer a global perspective: correctly predicted pixels (darker) and incorrectly predicted ones (lighter) form well-separated clusters in the embedding space, further validating the link between visual patterns and semantic consistency.
- Claim 2: To clarify how Fig. 1(a) is generated, we provide a step-by-step explanation in Fig. 7 (link).
Step (1) shows the confidence distributions of correct and incorrect predictions on Pascal VOC (1/16 split).
Step (2) defines true/false positive rates (TPR/FPR), while Steps (3) and (4) illustrate the trade-off between them under different confidence thresholds. Ground truth for unlabeled data is used here only for analysis.
- Claim 3: In Fig. 1(c) (or Tab. 7 in link), we quantitatively measure the embedding similarity between pixels with correct and incorrect pseudo labels. For each class, we compute the similarity between each predicted pixel and the sets of correct/incorrect pixels, average the results, and normalize them class-wise.
The results show that correctly predicted pixels have much higher average similarity to other correct pixels (0.925) than to incorrect ones (0.075), and vice versa. These findings provide numerical evidence that aligns with the qualitative observations in Fig. 1(b).
---
**Q2**: More Experiments.
**A2:**
- Exp 1: Agent Selection Effectiveness & Class Imbalance
(1) We evaluate the top-1% confidence-based selection strategy on Pascal VOC (1/16 split, ResNet-50). As shown in Fig. 8 (link), the high precision of selected pixels for both positive and negative agents confirms the effectiveness of our confidence-based strategy.
(2) Pascal VOC itself exhibits severe class imbalance (e.g., background has >200× pixels than bicycle). Cityscapes and COCO show even larger head-to-tail ratios (>400 and >10,000). Despite this, our agent selection strategy (Algorithm 1), which samples clean examples in a class-balanced manner, remains robust—consistently improving performance across datasets (Tables 1–4).
(3) As for using labeled GT to construct agents, we’ve tested this on Pascal VOC (92 labeled images) with UniMatch as baseline (Tab. 10 in link). Due to the known distribution gap between labeled and unlabeled data (as noted in DAW, SoftMatch), agents built from GT-labeled pixels underperform compared to our confidence-based method.
- Exp 2: Design Choices for AgScore
(1) Incorporating Negative Agents: Using only positive agents (Row 1) omits key contrastive signals, leading to inferior performance. Adding negative agents (Rows 2/3) in a proportional form introduces nonlinearity that normalizes and stabilizes similarity scores, enhancing label separability.
(2) Exponentiation for Score Scaling: Applying the exponential function to cosine similarity (Row 3) amplifies score differences, further improving performance—aligning with our theoretical motivation.
- Exp 3: Orthogonal Selection Strategy
(1) Our orthogonal selection strategy is tailored for negative agent construction, selecting diverse, low-confidence samples to ensure agent purity (Fig. 8).
(2) This enhances semantic coverage and improves the separation between correct/incorrect predictions, supporting our theoretical design.
(3) In practice, AgScore serves as a general plug-in compatible with various methods (e.g., with different loss functions), making our orthogonal selection both reasonable and flexible.
- Exp 4: Sensitivity to Selection Ratio. Experiments in Fig. 9 (link) show that raising the pixel selection ratio to 2% reduces accuracy. Our 1% threshold is thus a simple yet effective default. While not heavily tuned, this setting offers room for further optimization based on dataset characteristics.
We will include this analysis in the revised version.
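Based on the design choices in Exp 2 (a proportional form over positive/negative similarities, with exponentiated cosine similarity), one possible realization of the score is sketched below. The exact formula is an assumption, not taken from the paper.

```python
import numpy as np

# Hedged sketch of an AgScore-like function: exponentiated cosine
# similarities to the positive and negative agent sets, combined in a
# proportional (normalized) form so the score lies in (0, 1).
def ag_score(feat, pos_agents, neg_agents, tau=1.0):
    def mean_exp_cos(agents):
        a = agents / np.linalg.norm(agents, axis=1, keepdims=True)
        f = feat / np.linalg.norm(feat)
        return np.exp((a @ f) / tau).mean()
    s_pos, s_neg = mean_exp_cos(pos_agents), mean_exp_cos(neg_agents)
    return s_pos / (s_pos + s_neg)  # keep the pseudo label if > 0.5

pos_agents = np.array([[1.0, 0.1], [1.0, -0.1]])   # toy 2-D embeddings
neg_agents = np.array([[0.1, 1.0], [-0.1, 1.0]])
hi = ag_score(np.array([1.0, 0.0]), pos_agents, neg_agents)
lo = ag_score(np.array([0.0, 1.0]), pos_agents, neg_agents)
assert hi > 0.5 > lo   # pixel near positive agents scores high, and vice versa
```

The proportional form matches the rebuttal's point that dividing by the combined similarity normalizes the score, while the exponential amplifies small cosine differences.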
---
**Q3:** Theory.
**A3:** Please refer to the answer **A3** of reviewer bo7S.
---
**Q4:** Others.
**A4:** We will fix the typos and add more discussion within the space limitations.
---
We hope our response can resolve your concern. Have a nice day! | Summary: The authors introduce “AgScore” (Agent Score), a scoring function to filter out unreliable pseudo labels at the pixel level in order to improve the performance of existing semi-supervised semantic segmentation (SSSS) methods. Unlike prior work that primarily relies on high-confidence thresholding in the prediction (logits) space to decide which pseudo labels to use, this paper focuses on the phenomenon of “homogeneous pattern” in the feature/embedding space. Specifically, the authors first construct two sets of agents: positive agents (high-confidence, presumably correct pseudo labels) and negative agents (low-confidence, presumably incorrect pseudo labels). chosen through an orthogonal selection strategy ensuring semantic diversity. Then, a function computes each unlabeled pixel’s similarity to these two agent sets. If a pixel is more similar to the positive agent set than the negative agent set, the method deems its pseudo label more reliable; otherwise, it is filtered out. Empirically, integrating AgScore into existing SSSS frameworks such as FixMatch, UniMatch, and RankMatch consistently improves segmentation performance across popular benchmarks like Pascal VOC, Cityscapes, and COCO.
Claims And Evidence: The core claim that incorporating feature-space similarity (via positive and negative agents) enables more precise filtering of pseudo labels than standard confidence-thresholding alone is well supported by the experiments conducted integrating AgScore into three well-known baselines.
Methods And Evaluation Criteria: * Evaluation was done on three standard SS datasets (Pascal VOC, Cityscapes, COCO) using various data partition protocols following the practices in the field.
* AgScore is integrated into three different SS frameworks with improved performances in all cases, proving its effectiveness.
* Ablation studies (Tables 5, 6 and Figure 3 from the supplementary) support the design choices.
Theoretical Claims: I have not properly assessed the correctness of the theoretical claims and the proofs, but I confirm I have looked over them, and nothing evidently stands out as wrong.
Experimental Designs Or Analyses: These are sound as they follow the general practices in the field.
Supplementary Material: I have reviewed the supplementary material also since many of the results (quantitative and qualitative) referenced in the main submission end up in the supplementary.
Relation To Broader Scientific Literature: * The authors place their work in the context of existing semi-supervised segmentation approaches that rely on teacher-student frameworks, including FixMatch and others. They also connect their approach to contrastive learning in the feature space or methods that propose robust pseudo-label selection.
* The authors also relate their approach to relevant classical SSL ideas (like consistency regularization and pseudo-label thresholding).
Essential References Not Discussed: Overall, references to major prior SSSS methods (e.g., FixMatch, MeanTeacher, ST++, ReCo, etc.) are included. Even very recent ones like RankMatch (2024).
Other Strengths And Weaknesses: • Strengths:
- The agent construction idea is simple but effective and can be integrated with minimal overhead into multiple SSSS frameworks.
- The experimental validation systematically shows improvements.
- The theoretical discussion is a welcome addition.
• Weaknesses:
- The approach is based on carefully sampling from the top/bottom confidence. Though the paper includes ablations, there might be edge cases where strong multi-class overlap or severely noisy unlabeled sets pose challenges.
- The approach’s success depends on the assumption that extremely high-confidence pixels are reliably correct. This is usually true for well-trained teacher networks but might occasionally fail early in training or with domain-shift data.
Other Comments Or Suggestions: I recommend the authors have a closer look at their paper; there are still some typos to deal with and paragraphs ("trad-off") that need rephrasing (Page 1, column 2, L45-48) to improve clarity (Table 1 mentions bolded results, but none of them are bolded). But the most alarming thing is the confusion the authors make between AgScore and AugSeg (a method published and referenced in CVPR 2023), which they present as their own when they should have referred to AgScore. I treated this as an error; otherwise, using "Our AugSeg" breaches the double-blind submission restriction.
Questions For Authors: What it's currently missing from the experiments is a proper assessment of AgScore's robustness when the teacher model is underfitted in the early stage of training. Are there any warm-up strategies to ensure reliable positive agents from the start? Also, I am curious to know this, especially for domain shifts in a semi-supervised setting. Does the "top confidence = correct" assumption still hold there? Any insights from the authors are welcomed.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for taking the time to share your comments in the review assessment, as well as for acknowledging the **well-supported core claim**, **simple but effective idea**, **systematic experimental validation**, and **insightful theoretical analysis**. We provide a detailed point-by-point response to your comments.
Note that the following **link** refers to
https://anonymous.4open.science/r/AgScore/Rebuttal.pdf
---
**Q1**: Robustness of AgScore in handling edge cases.
**A1**: We appreciate your insightful question. Indeed, edge cases like **large domain shift**, **strong multi-class overlap** or **severely noisy unlabeled data** pose significant challenges to semi-supervised learning. To mitigate these limitations, our orthogonal selection strategy for constructing the negative agent set aims to select representative and diverse negative samples, covering a broader semantic space of incorrect pseudo-labels. This helps to some extent in handling domain shift, class overlap and noisy data. However, we acknowledge that in extremely severe edge cases, the baseline methods may struggle to produce effective predictions, which could hinder the selection of reliable agents and the evaluation of pseudo-labels.
In this paper, we only focus on the conventional semi-supervised setting, where the baseline methods generally yield effective predictions that support our approach. Your valuable suggestion inspires us to explore extending our method to extreme cases in future work, which is crucial for enhancing the robustness of semi-supervised segmentation.
---
**Q2**: Robustness of AgScore when the teacher model is underfitted.
**A2**: Indeed, the key to the success of our method lies in selecting reliable positive and negative agents. As shown in Fig. 9 in **link**, we record the precision of the selected positive and negative agents after each training epoch. The agents are selected based on the top-1% and bottom-1% confidence scores, respectively. Notably, even after just the 0-th epoch, the precision of both positive and negative agents is already satisfactorily high. This suggests that the teacher model can provide reliable agents for AgScore almost right from the start of training. Consequently, to maintain the simplicity of AgScore, we do not employ any specific warm-up strategies, as the experiments demonstrate its effectiveness without such strategies. We concur that carefully integrating warm-up strategies could potentially lead to better performance, and we leave this as a direction for future investigation.
---
**Q3**: Typo and "AugSeg" misuse.
**A3**: Thanks for your meticulous review and valuable feedback. We apologize for the oversight in mistakenly referring to "AgScore" as "AugSeg" in the Experiments section. We will carefully review the manuscript, correct these errors, and improve the clarity of our expressions.
---
We hope our response can resolve your concern. Please do not hesitate to let us know if you have further questions :) | null | null | null | null | null | null |
AffinityFlow: Guided Flows for Antibody Affinity Maturation | Accept (poster) | Summary: This manuscript proposes an alternating optimization framework for designing antibodies. In the first stage of the cycle, for a given fixed sequence, structures are generated with high binding affinity using a (structure-based) predictor guidance of AlphaFlow. In the second, the structures are inverse-folded to create mutated sequences and selected by a sequence-based affinity predictor. The cycles may be repeated to accumulate mutations. A key feature of the framework is co-teaching of the structure-based and sequence-based affinity predictors, based on selecting a subset of generated instances based on prediction consensus and updating the predictors based on just the selected subset. Experiments demonstrate superior affinity improvement and antigen specificity and competitive naturalness relative to methods based on protein language models trained on a large corpus, a sequence-based generative model, and structure-based generative models.
## update after rebuttal
The authors have incorporated my comments, importantly ones concerning accessibility to the general non-bio audience. The new experiments with gg-dWJS are convincing. I've raised my score to an accept.
Claims And Evidence: The paper claims "state-of-the-art performance in affinity maturation" but only uses the in silico $\Delta \Delta G$ energies for guidance and evaluation. $\Delta \Delta G$ has often been shown to correlate poorly with affinity (as measured by the dissociation constant KD), depending on the Rosetta protocol used. Mason et al. 2021, for instance, did not observe a significant correlation, and there are efforts to refine Rosetta protocols to improve the agreement (Dias and Kolaczkowski 2017). The key challenge in affinity maturation originates from the poor mapping between in silico proxies and experimental KD; without addressing this, I believe the paper's claim should be phrased less strongly than SOTA in "affinity maturation." While guided generation based on $\Delta \Delta G$ is still interesting and obtaining experimental KD data would be very expensive, the experiment should be considered a proof-of-concept of the proposed framework rather than a biologically significant scientific result.
```
Mason, Derek M., et al. "Optimization of therapeutic antibodies by predicting antigen specificity from antibody sequence via deep learning." Nature biomedical engineering 5.6 (2021): 600-612.
Dias, Raquel, and Bryan Kolaczkowski. "Improving the accuracy of high-throughput protein-protein affinity prediction may require better training data." BMC bioinformatics 18 (2017): 7-18.
```
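The correlation concern above is straightforward to quantify whenever paired in silico $\Delta \Delta G$ predictions and experimental $K_D$ values are available; a minimal pure-Python rank-correlation sketch (placeholder data, not from the paper; assumes no ties):

```python
def spearman(xs, ys):
    """Spearman rank correlation (assumes no ties in xs or ys)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# placeholder values, NOT real measurements: predicted ddG vs experimental log KD
ddg = [-2.1, -1.4, -0.3, 0.5, 1.2]
log_kd = [-9.0, -8.1, -8.4, -7.2, -7.5]
rho = spearman(ddg, log_kd)
assert -1.0 <= rho <= 1.0   # a rho near 1 would indicate good rank agreement
```

A low rho on held-out pairs would support the reviewer's point that the in silico proxy and the experimental endpoint can disagree in rank order.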
Methods And Evaluation Criteria: SAbDab-nano is a widely used dataset of antibody structures for sdAb. Functionality, specificity, and rationality (naturalness) are reasonable metrics capturing complementary aspects of generated designs. For functionality, it would have been instructive to evaluate on test $\Delta \Delta G$ values based on a few other Rosetta protocols.
Theoretical Claims: The paper does not make any theoretical claim.
Experimental Designs Or Analyses: - While AffinityFlow seems to outperform baselines in functionality and specificity, the metric values are quite close and it is difficult to assess significance without repeated runs.
- The ablation study is comprehensive and makes it clear that all components of the framework (multiple iterations, predictor-corrector, AlphaFlow, biophysical energy data, and selection) are helpful.
- The one sequence-based generative model baseline (dWJS) is unconditional and does not use any predictor guidance. For a "fair" comparison, dWJS should only be trained on the $\Delta \Delta G < 0$ instances.
- Readers not familiar with antibody design will have trouble understanding the paper, as it does not introduce domain-specific concepts. Examples: the change in binding free energy (what a negative value means), the CDRs and their regions, and how many amino acids they comprise (not mentioned).
- Because the metrics are all in silico, Section 4.6 (Case Study) is especially important for making the case that the generated designs are biologically meaningful. But it is currently very difficult to read without domain expertise. Details of the random forest regression are not described, even in the Appendix. Please describe thoroughly the connections to prior experimental studies such as Li et al. 2020.
- What value does the structure-based predictor $\hat f_\beta$ predict?
- There is no discussion of the form of the sequence-based and structure-based predictors.
Supplementary Material: I reviewed all of the supplementary material.
Relation To Broader Scientific Literature: This work borrows from the latest developments in structural ensemble generation methods such as AlphaFlow, inverse folding (derivatives of ProteinMPNN), and guided generation (predictor guidance). It combines techniques from each of these fields under a single pipeline, to generate high-affinity antibodies in a manner informed by both sequence and structure. Simple tricks like consensus-based sample selection and correction using Amber relaxation are important contributions that improve performance.
Essential References Not Discussed: There is a substantial body of work in predictor guidance for both sequence-based and structure-based generative models. Please see the references in Meng et al. 2024 for a comprehensive list.
```
Meng, Fanxu, et al. "A comprehensive overview of recent advances in generative models for antibodies." Computational and Structural Biotechnology Journal (2024).
```
Other Strengths And Weaknesses: - While the paper is creative and grounded in its combination of existing ideas, it is currently very difficult for a non-antibody expert to read. Please see "Experimental Designs Or Analyses" regarding the strengths and weaknesses of the experiments, questions about experimental details, and suggestions for improving the exposition for the lay audience.
- Please see "Claims And Evidence" for a discussion of the interpretation of Rosetta energies.
Other Comments Or Suggestions: N/A
Questions For Authors: Please see "Experimental Designs Or Analyses" regarding the strengths and weaknesses of the experiments, questions about experimental details, and suggestions for improving the exposition for the lay audience.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## General Reply
Thank you for your insightful comments—they’ve greatly improved our manuscript. We’ve addressed each point and will update the manuscript accordingly.
## Claims and Evidence
> poor mapping
Thank you for your feedback. We agree that the gap between in silico proxies and experimental KD is a key challenge. Accordingly, we will revise the abstract to state: "Our method, AffinityFlow, achieves state-of-the-art performance in proof-of-concept affinity maturation experiments."
## Experimental Designs Or Analyses
> difficult to assess significance
We acknowledge that AlphaFlow's inference involves computationally expensive protein conformation modeling, which limited our ability to perform repeated runs. Nonetheless, the reported performance differences, though modest, consistently favor AffinityFlow in functionality and specificity.
> dWJS should be trained on ddG<0.
As noted in Section 4.2, we use a sequence-based predictor to select top candidates for dWJS, effectively conditioning dWJS. Moreover, dWJS is trained solely on antibody sequences without antigen context, per the original paper, so filtering by ddG < 0 is not applicable. We also compare the guided version (gg-dWJS) in QKfP's rebuttal, which still underperforms our method.
> trouble understanding
We have described the change in binding free energy as the difference between the free energies of the bound and unbound states in Section 2.4 (Lines 127–130). A negative value indicates that the overall free energy of the system decreases upon binding, meaning that the antibody–antigen interaction is energetically favored. We will add these in Section 2.4 (Line 127).
An antibody consists of two heavy chains and two light chains with a similar overall structure. Its specificity is determined by six variable regions known as Complementarity Determining Regions (CDRs), denoted as H1, H2, H3, L1, L2, and L3. Typically, heavy chain CDRs range from $8$ to $16$ amino acids, while light chain CDRs range from $3$ to $10$ amino acids. We will include these in Section 2.1 (Line 80).
> difficult to read
(1) Random forest: We used scikit-learn’s RandomForestRegressor with 100 decision trees, training on mutation types (input) and Rosetta-predicted ΔΔG (output). The model was validated using R² on a 20\% held-out test set. Feature importances revealed Ala105Leu as the most influential mutation.
(2) Connections to prior studies: Our case study uses the MR17 nanobody (PDB: 7D30) from Yao et al. (2021), which has a reported KD of 83.7 nM. Li et al. (2021) later introduced a mutant, MR17m, with a Lys99Tyr substitution that improved IC50, indicating higher potency (i.e., requiring less antibody to achieve the same effect). They also suggested Lys99Trp could be even more effective—an uncommon mutation that AffinityFlow independently identified.
We will incorporate these discussions into Section 4.6.
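For concreteness, the random forest step can be sketched as follows; the mutation encoding and ΔΔG labels below are random placeholders rather than the actual case-study data, with feature 0 standing in for the Ala105Leu indicator:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder: binary indicators for 5 illustrative mutation types per sample.
X = rng.integers(0, 2, size=(200, 5)).astype(float)
# Placeholder Rosetta-style ddG labels, dominated by feature 0 ("Ala105Leu").
y = -1.5 * X[:, 0] + 0.1 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("held-out R^2:", round(r2_score(y_te, rf.predict(X_te)), 3))
# Feature importances flag the most influential mutation feature.
print("most influential feature:", int(np.argmax(rf.feature_importances_)))
```

With a fixed `random_state`, the importance ranking is deterministic; on real data the features would be the one-hot mutation types and the targets the Rosetta-predicted ΔΔG values.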
> predictor predict
The structure-based predictor outputs the negative value of the binding affinity (Kd). We will add this clarification in Section 2.4 (Line 131).
> form of predictors
We describe the architectures of both the sequence-based and structure-based predictors in Section 2.4, with detailed hyperparameters provided in Section 4.3. To further clarify, we will include more details regarding the MLP and GVP head in Section 4.3.
## Essential References Not Discussed
> see the references in Meng et al. 2024.
Thank you for highlighting this survey. We have reviewed Section 2.2.3 ("Hybrid generative models") of Meng et al. (2024) and confirm that key methods such as DiffAb and Chroma have been discussed in our paper. We note, however, that most methods listed in Meng et al. (2024) do not specifically address affinity maturation. To clarify, we will add to Appendix A: "ABGNN pre-trains a novel antibody language model and introduces a one-shot approach for generating both sequence and structure of CDRs. AbDiffuser leverages domain knowledge and physics-based constraints to enhance diffusion modeling. AlphaPanda integrates transformer, 3DCNN, and diffusion models for joint sequence-structure co-design. A more detailed overview can be found in Meng et al. (2024). Notably, these methods primarily focus on general antibody design rather than specifically targeting affinity maturation."
## Other Strengths And Weaknesses
> very difficult to read
In addition to the points in “Experimental Designs or Analyses”, we will add the following clarifications: (1) dSASA (change in solvent-accessible surface area) reflects how well hydrophobic residues are buried and how closely the antibody and antigen interact; (2) Shape complementarity measures how well the two proteins fit together. Both metrics indicate interface quality.
Adolf-Bryfogle et al. (2018) assessed these values for all naturally occurring antibody-antigen interfaces in the PDB. The metrics we calculate using Rosetta for our designs fit well within the distribution observed in that paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for incorporating my feedback, a bulk of them concerning accessibility to the general non-bio audience. The new experiments with gg-dWJS are convincing. I'll raise my score to an accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and support! We're glad the revisions improved accessibility for a broader audience and that the gg-dWJS experiments addressed your concerns. We appreciate your decision to raise the score. | Summary: The authors propose a pipeline to optimize sequences with structural guidance. AffinityFlow builds on AlphaFlow, a sequence-conditioned generative model. They present a two-stage optimization process: first, structure generation using a fixed sequence to guide the structure toward high binding affinity, followed by inverse folding and updating the initial sequence. The authors employ a co-teaching module to update the sequence after the inverse folding step.
Claims And Evidence: The claims are supported by experimental results.
Methods And Evaluation Criteria: The methods utilize established tools from the literature and combine them with a predictor. The clarity of the methods could be improved, especially regarding the training of the predictor. The evaluation appears sound, although I would suggest adding comparisons of inference time and comparing against methods that also utilize iterative optimization or refinement (if this is not already the case).
Theoretical Claims: There are no theoretical claims made in the paper.
Experimental Designs Or Analyses: The experiments seem reasonable, as the authors consider modifications on three CDR regions of an antibody and compare against multiple existing methods.
I think it would be interesting to compare against ESM combined with your sequence predictor. I believe it is possible to perform an MCMC where you use your predictor to accept or reject proposals. This could help understand which parts of the pipeline are the most important (similar to your ablation study in Table 2).
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The key contributions of the paper relate to other works on designing pipelines for generating new proteins while steering generation toward certain properties, such as binding affinities.
Essential References Not Discussed: I believe BindCraft is relevant to this work, as the authors optimize a sequence based on a structure network in an iterative process.
Other Strengths And Weaknesses: The method seems to improve on existing methods for specific tasks.
However, I think the method may require more time during inference. I would encourage the authors to report inference time and peak memory usage for each method (at least for their method).
Other Comments Or Suggestions: I don't have any other comments or suggestions.
Questions For Authors: - I might have missed it, but how do you train $f_\beta$ in Eq. 7? Is it a pre-trained energy function? Writing the loss function used to train the structure-based predictor could improve clarity.
- I am slightly confused with the notation $f_\alpha$ and $f_\beta$. One is for sequences and the other for structures, correct?
- I am not familiar with all the methods used for comparison. Is there a method that also includes a refinement/optimization loop? It would be great to include one if that is not already the case, or directly compare with [1] BindCraft if possible.
[1] Pacesa, M., Nickel, L., Schellhaas, C., Schmidt, J., Pyatova, E., Kissling, L., ... & Correia, B. E. (2024). BindCraft: one-shot design of functional protein binders. bioRxiv, 2024-09.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## General Reply
Thank you for your constructive feedback, which has improved the clarity and rigor of our paper. We have addressed all points and will revise the manuscript accordingly.
## Methods And Evaluation Criteria:
> The clarity of the methods (predictor training).
To further clarify our method, we highlight the following points:
(1) Predictor Architecture: We described the predictor architecture in Section 4.3, where $\alpha$ and $\beta$ represent the parameters of the sequence-based and structure-based predictors, respectively. We will explicitly reiterate these parameter definitions in Section 4.3 to improve readability.
(2) Predictor Training: The training of our predictors involves two phases: (a) Supervised Training: Initially, we train both predictors using labeled antibody-antigen sequence/structure data and their affinity labels. Specifically, we optimize the parameters by minimizing the mean squared error (MSE) loss: $(f_{\alpha}(a) - \Delta G)^2$ for the sequence-based predictor and $(f_{\beta}(x) - \Delta G)^2$ for the structure-based predictor. Here $a$ and $x$ denote the protein sequence and structure respectively, and $\Delta G$ corresponds to the negative of affinity. We will explicitly present these in Section 2.4. (b) Co-teaching Fine-tuning: We further refine both predictors using Rosetta-generated labeled data, optimizing the loss in Eq. (9) to further enhance performance.
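As a minimal illustration of the supervised phase (a), the sketch below fits a toy linear stand-in for $f_{\alpha}$ to placeholder $\Delta G$ labels by gradient descent on the MSE loss; the featurization and data are random stand-ins, not our ESM-based predictor:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder featurization of antibody-antigen sequences (n samples, d features).
A = rng.normal(size=(128, 8))
dG = A @ rng.normal(size=8)  # placeholder Delta G labels (negative of affinity)

w = np.zeros(8)  # linear stand-in for the trainable head of f_alpha
for _ in range(500):
    grad = 2 * A.T @ (A @ w - dG) / len(dG)  # gradient of mean (f_alpha(a) - dG)^2
    w -= 0.05 * grad

mse = np.mean((A @ w - dG) ** 2)
print("final MSE:", mse)
```

In the real pipeline the head sits on a frozen ESM-2 (or ESM2-GVP) backbone and is trained with the same MSE objective before co-teaching fine-tuning.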
> comparisons of inference time and comparing against methods that also utilize iterative optimization or refinement (if this is not already the case).
We have compared the inference time of our method with language model-based approaches in Appendix D. For methods that do not rely on language models, their iterative optimization process takes less than $10$ seconds per sample. Although these alternative methods are more efficient, in the context of antibody design, achieving high-affinity designs is prioritized over computational speed.
## Experimental Designs Or Analyses
> I think it would be interesting to compare against ESM combined with your sequence predictor.
Thank you for the insightful suggestion. Our current ESM baseline already incorporates the sequence predictor to select the top three sequences per antigen (Section 4.2).
To further explore your idea, we implemented an MCMC variant: At each step, we use ESM to identify the top 20 most probable mutations, randomly choose one mutation as a proposal, and use the affinity predictor to compute its acceptance probability. We repeat this procedure for a total of 9 steps for consistency.
Evaluating the resulting sequences, we obtain scores for IMP, Sim, and Nat of 65.6, 0.562, and 0.360, respectively. While the IMP score slightly improves (from the original 64.0 to 65.6), it remains inferior to our proposed method. The drop in Sim likely stems from strong antigen-specific guidance every step, while Nat improves due to MCMC’s conservative acceptance. We will incorporate these discussions into Section 4.2.
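A minimal sketch of this accept/reject loop, with toy stand-ins for both the ESM proposal step and the affinity predictor (the sequence, scoring rule, and proposal distribution below are illustrative only):

```python
import math
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_affinity(seq):
    # Placeholder predictor: rewards tryptophan content (illustration only).
    return seq.count("W")

def propose(seq):
    # Stand-in for "pick one of ESM's top-20 mutations": mutate one position.
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]

seq = "ARDYYGSSYFDY"  # illustrative CDR-H3-like sequence
score = toy_affinity(seq)
for _ in range(9):  # 9 steps, matching the setting above
    cand = propose(seq)
    cand_score = toy_affinity(cand)
    # Metropolis-style acceptance: keep improvements, sometimes accept worse.
    if cand_score >= score or random.random() < math.exp(cand_score - score):
        seq, score = cand, cand_score
print(seq, score)
```

The conservative acceptance step is what tends to preserve naturalness (Nat) relative to greedy per-step guidance.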
## Essential References Not Discussed
> BindCraft relevant
Thank you for highlighting this work. We will add the following to Appendix A:
"BindCraft adopts AF2-multimer to generate a binder backbone and sequence given a known target protein structure, subsequently optimizing the non-interface regions using ProteinMPNN. However, BindCraft is not directly comparable to our method, as it specifically targets binder design with known target structures, whereas our setting focuses on affinity maturation through mutation of existing antibody sequences without direct access to target structures. Moreover, the primary goal of BindCraft is generating new binders, rather than improving binding affinity of existing antibodies."
## Other Strengths And Weaknesses
> report inference time
Our method does require more inference time due to AlphaFlow’s realistic structure modeling, as discussed in Appendix D. While less efficient than alternatives, it consistently yields better designs. In practice, where wet-lab evaluation dominates cost and time, optimization quality is prioritized over computational speed.
## Questions For Authors:
> how train $f_{\beta}$ in Eq. 7
We first train $f_{\beta}$ using the MSE loss and then fine-tune it with the loss in Eq. (9). It is not a pre-trained energy function; rather, it is a property predictor built upon the ESM2-GVP backbone. We have described this training procedure and loss in the rebuttal of "Methods and Evaluation Criteria".
> confused with notation
Yes, $f_{\alpha}(a)$ refers to the sequence-based predictor, and $f_{\beta}(x)$ refers to the structure-based predictor.
> not familiar with comparison methods
All non-language-model-based methods used for comparison—including dWJS, DiffAb, AbDPO, and GearBind—employ iterative optimization loops for refinement. Our results demonstrate that AffinityFlow consistently outperforms these approaches. | Summary: The work combines classifier (gradient) guidance with Alphaflow for flow matching based antibody structure optimization to enhance binding affinity and performs inverse folding with ProteinMPNN to retrieve antibody sequences for synthesis. It also proposes a noise reduction framework (co-teaching) for labeled data whilst training affinity predictors. The authors evaluate their method's performance on SAbDab dataset which shows improvements over the baselines.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Given the nature of the work, which is optimization, I expect more baselines that use optimization techniques. Methods such as ESM and AbLang are not particularly suitable for optimizing for antigen affinity. The authors also use dWJS as a baseline. Given their method uses classifier guidance, why not use the classifier guidance version of dWJS, i.e., gg-dWJS? Given the limited nature of the experiments, I think such an expectation is justified.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See Methods And Evaluation Criteria.
Supplementary Material: Yes. I checked the materials.
Relation To Broader Scientific Literature: The work adds classifier guidance to flow based method for antibody sequences and uses inverse folding to convert to sequence (so that they could be synthesized).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
- The writing in general is concise and easy to understand
- The proposed method works as claimed shown through the experiments (although I remain unconvinced about the baselines uses, see evaluation and criteria)
Other Comments Or Suggestions: N/A
Questions For Authors: - Given the pitfalls of inverse folding (noisiness for conversion of sequence to structure) and structure optimization (unrealizability of structures), how do you guarantee their method's generated sequences are synthesizable? Did you try synthesizing any samples suggested by your method?
- Did the authors attempt any in vitro experiments?
- Why is flow matching a better option for this work?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## General Reply
Your insightful comments have greatly contributed to improving our manuscript, and we sincerely appreciate your time and effort. Each point you mentioned has been addressed, and the manuscript will be updated to reflect these improvements.
## Methods And Evaluation Criteria:
> Given the nature of the work, which is optimization, I expect more baselines that use optimization techniques. Methods such as ESM and AbLang are not particularly suitable for optimizing for antigen affinity. The authors also use dWJS as a baseline. Given their method uses classifier guidance, why not use the classifier guidance version of dWJS, i.e., gg-dWJS? Given the limited nature of the experiments, I think such an expectation is justified.
As discussed in Section 4.2, we employ the same trained sequence-based affinity predictor for final sequence selection across all baselines, including ESM, AbLang, and dWJS. This strategy inherently enables affinity optimization, making these methods valid baselines for comparison.
Thank you for pointing out gg-dWJS. To clarify, we have further conducted experiments with gg-dWJS, employing the trained affinity predictor to guide sampling, following the original gg-dWJS paper. Specifically, for the CDR-H3 region, gg-dWJS achieves IMP, Sim, and Nat scores of 67.2, 0.520, and 0.291, respectively, which are still worse than our method. We observe that the IMP score does not significantly improve compared to the original dWJS (which is 66.1). This suggests that the affinity predictor used in the post-selection step of the original dWJS already contributes effectively to guided generation. We will incorporate this additional discussion into Section 4.4 to clarify this point further.
## Questions For Authors:
> Given the pitfalls of inverse folding (noisiness for conversion of sequence to structure) and structure optimization (unrealizability of structures), how do you guarantee their method's generated sequences are synthesizable? Did you try synthesizing any samples suggested by your method?
We address the concern regarding synthesizability from three key perspectives:
(a) Realistic Structure: (1) We use AlphaFlow as our protein structure generation framework, which has demonstrated its effectiveness in producing realistic protein conformations. (2) In addition, our Predictor-Corrector method incorporates Amber relaxation at every iteration to refine the protein coordinates, ensuring the generated structures are physically realistic.
(b) Limited Mutation Per Iteration: (1) Although the inverse folding process (mapping structure to sequence) can be noisy, we mitigate this issue by restricting mutations to only 1-3 positions per stage rather than generating the entire sequence at once. This targeted approach reduces noise; (2) Furthermore, we employ a post-selection process using a trained sequence-based predictor to filter out sequences with low predicted affinity.
(c) Biologically Meaningful Case Study:
We present a detailed case study in Section 4.6 to demonstrate the biological relevance of our generated designs. Notably, our AffinityFlow model independently identified the Lys99Trp mutation—a mutation previously proposed in Li et al. (2020) as a promising improvement in antibody potency.
These strategies collectively enhance the likelihood that our generated sequences are synthesizable and biologically meaningful. Furthermore, we report Nat (the inverse of perplexity) as a supporting metric. As shown in Table 1, AffinityFlow achieves the highest Nat score among all non-language model-based methods, indicating stronger sequence plausibility.
> Did the authors attempt any in vitro experiments?
We have not conducted in vitro experiments at this stage.
> Why is flow matching a better option for this work?
We adopt flow matching primarily due to the availability of a pre-trained AlphaFlow framework, which has demonstrated exceptional performance in generating realistic protein conformations. Additionally, recent studies [1, 2, 3] show that flow matching can offer superior effectiveness and efficiency compared to diffusion models.
It is also important to note that our proposed alternative framework and co-teaching module are not inherently tied to the AlphaFlow framework; they can, in principle, be implemented within any sequence-conditioned generative model of structure.
- [1] Lipman, Y., Chen, R. T. Q., Ben-Hamu, H., et al. "Flow Matching for Generative Modeling." ICLR 2023.
- [2] Le, M., Vyas, A., Shi, B., et al. "Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale." NeurIPS 2023.
- [3] Polyak, A., et al. "Movie Gen: A Cast of Media Foundation Models." arXiv preprint, 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for your efforts. I have updated my scores to an accept. Please incorporate the changes to your manuscript.
- "Why is flow matching a better option for this work?" -> the answer to this should be added to the motivation in the intro.
- A note on synthesizability somewhere in the conclusion.
- The results on the gg-dWJS. Minor: the citation could be updated [1]
1. Ikram, Zarif, Dianbo Liu, and M. Saifur Rahman. "Gradient-guided discrete walk-jump sampling for biological sequence generation." Transactions on Machine Learning Research.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and support! We’re glad to hear you’ve updated your score to an accept. We will incorporate the requested clarifications into the introduction and conclusion, update the gg-dWJS citation, and ensure all revisions are reflected in the final manuscript. | Summary: The paper proposes the AffinityFlow model and constructs an optimization framework for generating high-affinity antibodies. First, it utilizes a structure-based affinity predictor to guide the generation of antibody structures. Subsequently, it creates sequence mutations through inverse folding. This model enables a sequence-conditioned generative model of structure, iteratively guiding the generation of high-affinity antibodies. It has been validated on the SAbDab dataset.
## Update After Rebuttal
Thank you for the authors' response, which has resolved my confusion. I have increased my score.
Claims And Evidence: I think the architecture of the model is not clearly described. There is relatively little discussion about the Predictor-Corrector and Sequence Mutation.
Methods And Evaluation Criteria: It is reasonable to conduct the evaluation and select the indicators on the SAbDab dataset.
Theoretical Claims: I read about Guided Structure Generation, Sequence Mutation, but I'm still confused about the overall framework of the model and how the data is connected.
Experimental Designs Or Analyses: I think the experimental design is reasonable.
Supplementary Material: I read sections such as Related Work and Predictor Guidance in Flow Matching in the Appendix.
Relation To Broader Scientific Literature: Generating antibodies with high affinity is of great importance. However, currently, it may be limited by the relatively small amount of data with known affinity labels. Therefore, research on related artificial intelligence methods is highly necessary.
Essential References Not Discussed: I think the relevant literature has been covered.
Other Strengths And Weaknesses: Pros:
The process of the alternating algorithm model generating high-affinity is both interesting and reasonable.
Cons:
1. The description of the model architecture is not very clear. Is the model end-to-end? How are ESM-2, ESM2-GVP, AlphaFlow, etc. organized and used? An algorithm or a framework diagram of the model is needed for demonstration.
2. The SAbDab dataset is small. How can we ensure that the model does not overfit?
3. The paper lacks comparative experiments on runtime.
If you can clear up my confusion, I will raise my score because the experimental results are objective.
Other Comments Or Suggestions: There are no other suggestions here.
Questions For Authors: All the questions are in "Other Strengths And Weaknesses".
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## General Reply
Thank you for your valuable feedback, which has greatly improved our manuscript. We have addressed each comment and will incorporate the revisions accordingly.
## Claims And Evidence
> I think the architecture of the model is not clearly described. There is relatively little discussion about the Predictor-Corrector and Sequence Mutation.
We provide additional details on both Predictor-Corrector and Sequence Mutation.
(1) Predictor-Corrector is composed of two components: the Predictor corresponds to the protein coordinate generation process governed by the learned vector field, and the Corrector refers to the Amber energy minimization used to refine the coordinates. We will add this sentence at Line 180.
(2) Sequence Mutation: At each iteration, we apply single-, double-, and triple-point mutations using ProteinMPNN. For each position, we calculate the probability difference between the current and alternative amino acids, selecting the mutation with the highest difference. Double- and triple-point mutations build sequentially on prior mutations. A sequence-based predictor selects the top K (K=3) sequences at each stage for further refinement. We will add this description at Line 191 for clarity.
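The per-position selection rule can be sketched with placeholder probabilities; in the actual pipeline these come from ProteinMPNN, and the sequence here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
AA = list("ACDEFGHIKLMNPQRSTVWY")
seq = "ARDYYGSSYFDY"  # illustrative CDR sequence

# Placeholder per-position amino-acid probabilities (each row sums to 1);
# in the real pipeline these are ProteinMPNN outputs.
probs = rng.dirichlet(np.ones(20), size=len(seq))

best = None
for pos, wt in enumerate(seq):
    p_wt = probs[pos, AA.index(wt)]
    for j, aa in enumerate(AA):
        if aa == wt:
            continue
        diff = probs[pos, j] - p_wt  # probability gain of mutating wt -> aa
        if best is None or diff > best[0]:
            best = (diff, pos, aa)

diff, pos, aa = best
mutant = seq[:pos] + aa + seq[pos + 1:]
print(f"single-point mutation: {seq[pos]}{pos}{aa} ->", mutant)
```

Double- and triple-point mutations would repeat this selection on the mutant returned by the previous step.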
## Theoretical Claims:
> I read about Guided Structure Generation, Sequence Mutation, but I'm still confused about the overall framework of the model and how the data is connected.
To clarify the overall framework and data flow, consider this example:
Given antibody and antigen sequences, our goal is to mutate the antibody to improve binding affinity. We begin by linking the antibody and antigen sequences and inputting this linked sequence into the AlphaFlow. Without predictor guidance, AlphaFlow generates conformations consistent with the linked sequence.
However, our objective is to obtain a protein conformation with higher binding affinity. To achieve this, we introduce predictor guidance during the AlphaFlow structure generation phase, a process we call Guided Structure Generation. This guidance steers the generated conformations toward those with high binding affinity. Once a high-affinity structure is obtained, we apply inverse folding to introduce targeted mutations—Sequence Mutation—and feed the updated sequence back into AlphaFlow. This iterative loop continues to optimize binding affinity. This process is described between Lines 42–73.
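The alternating loop can be outlined as follows; every component here is a toy stand-in (the real pipeline uses AlphaFlow with predictor guidance, Amber relaxation, ProteinMPNN, and the trained ESM-2 affinity predictor):

```python
import random

random.seed(0)
AA = "ACDEFGHIKLMNPQRSTVWY"

# Toy stand-ins for the real components:
def generate_structure_guided(linked_seq):
    return linked_seq          # pretend the "structure" is just the sequence

def amber_relax(structure):
    return structure           # corrector step (no-op here)

def inverse_fold_mutations(structure, seq, n=4):
    out = []
    for _ in range(n):         # stand-in for ProteinMPNN point mutations
        i = random.randrange(len(seq))
        out.append(seq[:i] + random.choice(AA) + seq[i + 1:])
    return out

def predict_affinity(seq):
    return seq.count("W")      # toy sequence-based affinity score

def affinityflow_sketch(ab_seq, ag_seq, n_iters=3, top_k=3):
    candidates = [ab_seq]
    for _ in range(n_iters):
        pool = []
        for seq in candidates:
            # link antibody + antigen, generate a guided structure, relax it
            structure = amber_relax(generate_structure_guided(seq + ag_seq))
            pool.extend(inverse_fold_mutations(structure, seq))
        pool.sort(key=predict_affinity, reverse=True)  # post-selection
        candidates = pool[:top_k]
    return candidates

best = affinityflow_sketch("ARDYYGSSYFDY", "NTQEVFAQVK")
print(best[0], predict_affinity(best[0]))
```

The loop structure (guided structure generation, correction, inverse folding, predictor-based post-selection, repeat) mirrors the description above; all function bodies are placeholders.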
## Other Strengths And Weaknesses:
> The description of model architecture is not very clear.
Yes, the model is end-to-end. Our architecture is organized as follows:
(1) ESM-2 serves as the sequence-based predictor and is applied during post-selection (as shown in Figure 1) to choose high-affinity mutants for the next iteration.
(2) ESM2-GVP acts as the structure-based predictor and is used for predictor guidance, steering the noisy protein coordinates toward high-affinity conformations.
(3) AlphaFlow is the core algorithm, initially trained to transform noisy protein coordinates into clean conformations based on the input sequence. In our framework, AlphaFlow is further guided by ESM2-GVP to bias the generated structures toward higher binding affinity.
We will update Figure 1 to clearly label ESM2, ESM2-GVP, and AlphaFlow, and include an overall algorithm that demonstrates the complete framework in the revised paper.
> The SAbDab dataset is small.
We address this issue with the following strategies:
(1) AlphaFlow: AlphaFlow is pretrained on large-scale protein structures. In our method, this pretrained model remains frozen, thereby preventing it from overfitting the small dataset.
(2) Sequence-based (ESM-2) and Structure-based Predictors (ESM2-GVP): Both predictors leverage pretrained ESM-2 models, which have been pretrained on large-scale unlabeled data. During our training, we keep these pretrained backbones frozen and fine-tune only the prediction heads. This reduces the risk of overfitting due to the small number of learnable parameters.
(3) Augmented Dataset via Rosetta Labeling and Co-teaching: To further improve generalization, we introduce a co-teaching module. Initially, the predictors are trained on the limited SAbDab dataset. Next, we augment the training data by generating an additional $4,158$ labeled samples using Rosetta. The co-teaching module then mitigates noise from the augmented data, providing a richer dataset and reducing the risk of overfitting.
These measures collectively ensure that our models remain robust and generalize effectively, despite the limited size of SAbDab.
> lacks comparative runtime.
We address runtime in Appendix D: one iteration of our method takes around $10$ minutes for a protein of length $500$, compared to $11$–$18$ seconds for language model-based methods per sample. While these alternatives are faster, our approach yields better designs. In practical settings like antibody design, wet-lab evaluation is the major bottleneck, making optimization quality more critical than generation speed.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. It has resolved my confusion to some extent. I've raised the score to 4. I hope you can add a detailed description of the model architecture in the camera-ready version.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. We're glad to hear the clarification helped. We appreciate the updated score and will make sure to include a detailed description of the model architecture in the camera-ready version. | null | null | null | null | null | null |
Tensor Product Attention Is All You Need | Reject | Summary: The authors propose a straightforward drop-in replacement for multi-head attention that they call Tensor Product Attention (TPA). The core idea is to compute queries, keys, and values using tensor products. The authors show that TPA can substantially reduce KV cache memory footprint, and that TPA can handle RoPE embeddings efficiently (by rotating keys before caching). They interpret other attention approaches (classic attention, MQA, GQA) as non-contextual versions of TPA. They further show that TPA is competitive with existing approaches in terms of pre-training perplexity and performance on popular downstream tasks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Not in detail, but I didn't see any glaring issues.
Experimental Designs Or Analyses: The pre-training and downstream task evaluation seem reasonable.
Supplementary Material: I looked through some tables in the appendix.
Relation To Broader Scientific Literature: This work is related to previous attention approaches (MHA, GQA, MQA, MLA). It is also related to work on decreasing the memory footprint of the KV cache.
Essential References Not Discussed: No, not that I know of. I found the related work to be quite nice.
Other Strengths And Weaknesses: * I think the main strength of this paper is that the idea is simple/straightforward, and it seems to work well empirically. It seems like a promising approach for real-world use.
* Some minor weaknesses
* I think it would be nice if the results for 0-shot and 2-shot and small, medium, large, and XL models could be included in the main body of the paper somehow. Maybe Tables 2 and 3 could be replaced with a table of averages across the different sizes and numbers of shots, with the individual results saved for the appendix. The results in the main body currently seem a little cherry-picked.
* It seems to me that it's very fair to say that TPA is comparable to existing methods in terms of pre-training and downstream performance, but I don't know that the results support it being superior. I think the paper overstates this (e.g., "T6 exceeds the performance of standard Transformer baselines including MHA, MQA, GQA, and MLA across various metrics, including perplexity and a range of renowned evaluation benchmarks."). I think it would be better to focus on the decrease in KV cache cost.
Other Comments Or Suggestions: * Typos
* Last sentence on page 1 "The recently introduced..." has a typo.
* I found the math section a bit needlessly complex at times. It's understandable, and good enough I think, but I think it could be shortened and improved.
Questions For Authors: Please see above two sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive feedback on our submission. Your comments have provided valuable insights that will help us improve the clarity and impact of our work. We appreciate your positive assessment and recommendation. Below, we address each of your points in detail and outline our planned revisions.
1. **Regarding the Presentation of Results:**
> Q1: "I think it would be nice if the results for 0-shot and 2-shot and small, medium, large, and XL models could be included in the main body of the paper somehow... The results in the main body currently seem a little cherry-picked."
A1: We appreciate your suggestion to enhance the presentation of our experimental results. We agree that a more comprehensive summary in the main text would provide a clearer and less selective view of TPA’s performance, directly addressing the concern about potential cherry-picking. To address this, we plan the following changes:
- We will replace the current Tables 2 and 3 with a new, consolidated table. This table will summarize the **average performance** (e.g., average perplexity and key downstream task accuracy) across small, medium, large, and XL model sizes for both 0-shot and 2-shot evaluations, offering a concise overview of TPA’s effectiveness compared to baselines like MHA, MQA, GQA, and MLA.
- The full, detailed results for each model size, task, and shot configuration, currently spread across the main text and appendix (Tables 2, 3, 6-11), will be consistently located in the appendix. This allows interested readers to explore specifics while keeping the main text focused.
2. **Regarding the Claim of Superiority vs. Comparability:**
> Q2: "It seems to me that it's very fair to say that TPA is comparable to existing methods... but I don't know that the results support it being superior. I think the paper overstates this... I think it would be better to focus on the decrease in KV cache cost."
A2: We value your perspective on the tone of our performance claims and agree that a more nuanced presentation, emphasizing the significant memory efficiency alongside competitive performance, better reflects our contribution. We will revise the paper as follows:
- We will adjust the language throughout the paper—particularly in the abstract, introduction, and conclusion—to state that TPA achieves **competitive or comparable performance** relative to baselines, rather than asserting outright superiority. For instance, the sentence you highlighted will be rephrased similarly to: *"T6 achieves competitive performance compared to standard Transformer baselines... across various metrics, while offering significant memory savings."*
- We will strengthen the focus on TPA’s primary practical advantage: **KV cache reduction**. In Section 3.3, we will further highlight and clearly present the quantification showing that TPA can reduce the KV cache size by approximately 10x compared to MHA for typical configurations, explicitly linking this to the ability to handle much longer sequences under memory constraints.
3. **Regarding Typos and Mathematical Complexity (Other Comments):**
> Q3: "Last sentence on page 1 'The recently introduced...' has a typo. I found the math section a bit needlessly complex at times... I think it could be shortened and improved."
A3: Thank you for catching the typo and for your feedback on the mathematical presentation.
- We will correct the typo on page 1 and perform a thorough proofread of the entire manuscript.
- We will revise Sections 2 (Background) and 3 (Tensor Product Attention) to improve clarity and accessibility. This will involve consolidating notation where possible, ensuring all symbols are clearly defined upon first use, and refining explanations. We will also consider moving particularly lengthy derivations or detailed proofs (such as aspects of the FLOP analysis currently in Appendix A) to the appendix to maintain a smoother narrative flow in the main text, while ensuring the core methodology remains rigorously presented.
Thank you once again for your time and valuable suggestions. We are confident that incorporating these revisions will significantly strengthen the paper.
---
Rebuttal Comment 1.1:
Comment: These revisions sound great.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive and encouraging feedback. If you are satisfied with our work and feel our contributions deserve a higher score, we would sincerely appreciate your consideration in raising the score. | Summary: The paper proposes a new parameterization for the QKV activations that arguably is even simpler than the multi-head latent attention from deepseek.
The paper calls its method “tensor product attention” and connects it to higher-order tensor products in Appendix B, but if I understood the paper correctly, all of their experiments are done with order-2 tensors, which are more easily understood as vector outer products. The key innovation is to represent the query, key, and value matrices as sums of $R_Q$, $R_K$, and $R_V$ contextual vector outer products, respectively (which makes them essentially low-rank matrices, since a sum of $R$ rank-1 matrices has rank at most $R$).
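A minimal NumPy sketch of the rank-$R$ outer-product construction described above; all shapes and weights below are purely illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Illustrative sizes (hypothetical, not the paper's settings)
d_model, h, d_h, R = 64, 4, 16, 2  # hidden dim, heads, head dim, rank

rng = np.random.default_rng(0)
x = rng.standard_normal(d_model)   # one token's hidden state

# Contextual factors: each factor is a linear function of x
# (random weights stand in for the learned projections)
W_a = rng.standard_normal((R, h, d_model)) / np.sqrt(d_model)
W_b = rng.standard_normal((R, d_h, d_model)) / np.sqrt(d_model)
a = W_a @ x  # shape (R, h):   head-side factors
b = W_b @ x  # shape (R, d_h): dimension-side factors

# K (or Q/V) for this token: a sum of R rank-1 outer products
K = sum(np.outer(a[r], b[r]) for r in range(R))  # shape (h, d_h)

# A sum of R rank-1 matrices has rank at most R
assert np.linalg.matrix_rank(K) <= R
```

The same construction applies per token across a sequence; only the small factors `a` and `b` need to be cached, not the full `(h, d_h)` matrix.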
The paper describes a few different variants of their basic idea, such as making part of the QKV computation non-contextual, i.e., independent of the token embedding, and the experiments seem promising, albeit incomplete.
Claims And Evidence: The main claim of the paper is that “Tensor Product Attention” is a good way of parameterizing attention, and the experiments in the paper test that by comparing MHA / MQA / GQA / MLA vs. TPA in the nanoGPT code base, training on a standard text dataset and evaluating benchmarks such as perplexity and lm-evaluation-harness.
Methods And Evaluation Criteria: yes
Theoretical Claims: The overall arguments in the paper seem sound and I checked the proof of theorem 1.
There is some mention of using higher order tensors and that Rope can still be integrated natively with higher order tensors in theorem 2 in Appendix B but I did not go through that proof.
Experimental Designs Or Analyses: no
Supplementary Material: no
Relation To Broader Scientific Literature: Tensor train, Block tensor train etc. are tensor decomposition methods that have been tried to ameliorate the memory bandwidth bottleneck by increasing the amount of compute done per parameter/byte transferred to the GPU. This paper proposes a novel way of using the tensor decomposition idea by decomposing the activations instead of the parameters themselves for computing attention.
In some senses this can also be thought of as an alternative implementation of Deepseek's low-rank/latent attention by presenting the attention explicitly as a sum of k rank 1 matrices.
Essential References Not Discussed: probably not.
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: n/a
Questions For Authors: 1. The paper seems to only focus on order 2 tensor products and the equations in 3.1/3.2/3.3 seem to essentially implement low rank decompositions of Q,K,V matrices. Have you experimented with higher order tensors at all that are mentioned in Appendix B?
2. At the end of Appendix A there is a brief comparison of TPA vs MLA in terms of the computation required and it’s claimed that d_rope + d_c can inflate the dot product cost by roughly 4.5x to 9x compared to MQA. The numbers used to arrive at this conclusion seem to contradict the conclusions in DeepSeek v3 where MLA only causes an increase of 2.25x vs MQA. Could you please elaborate on this part a little more.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful feedback and constructive comments. We have carefully considered all concerns and we provide detailed responses to your questions below.
> Q1: The paper describes a few different variants of their basic idea, such as making part of the QKV computation non-contextual, i.e. independent of token embedding and the experiments seem promising, albeit incomplete.
A1: Thank you for your attention to the non-contextual TPA. According to the experimental results of TPA small and medium models with non-contextual B shown below (compared with Tables 2, 6, 8, and 9 in the paper), the non-contextual versions are slightly worse than the current version of TPA. Therefore, to achieve better performance, the original TPA, TPA-KVonly, and TPA with non-contextual A are recommended.
| 0-shot | ARC-E | ARC-C | BoolQ | HellaSw. | OBQA | PIQA | W.G. | MMLU | SciQ | Avg. |
| ------------------ | ----- | ----- | ----- | -------- | ---- | ----- | ----- | ----- | ---- | ----- |
| TPA_nonctxB_small | 47.39 | 26.37 | 54.8 | 32.71 | 30.2 | 63.38 | 50.2 | 23.13 | 64.8 | 43.66 |
| TPA_nonctxB_medium | 55.43 | 29.69 | 58.32 | 40.77 | 34.4 | 66.92 | 51.38 | 25.66 | 71.1 | 48.19 |
| 2-shot | ARC-E | ARC-C | BoolQ | HellaSw. | OBQA | PIQA | W.G. | MMLU | SciQ | Avg. |
| ------------------ | ----- | ----- | ----- | -------- | ---- | ----- | ----- | ----- | ---- | ----- |
| TPA_nonctxB_small | 50.8 | 26.96 | 57.65 | 32.4 | 29.4 | 63.22 | 49.57 | 23.96 | 66.4 | 44.48 |
| TPA_nonctxB_medium | 61.2 | 30.2 | 55.93 | 40.45 | 34.4 | 68.23 | 51.78 | 26.11 | 78.1 | 49.6 |
> Q2: The paper seems to only focus on order 2 tensor products and the equations in 3.1/3.2/3.3 seem to essentially implement low rank decompositions of Q, K, V matrices. Have you experimented with higher order tensors at all that are mentioned in Appendix B?
A2: We have implemented experiments on small-size models with third-order TPA, including a variant with only KV factorized. The performance on small models is shown below; it is worse than the other TPA-series models. Therefore, for small models we recommend second-order TPA, but higher-order TPA still has potential for much larger models.
| | ARC-E | ARC-C | BoolQ | HellaSw. | OBQA | PIQA | W.G. | MMLU | SciQ | Avg. |
| ------ | ----- | ----- | ----- | -------- | ---- | ----- | ----- | ----- | ---- | ----- |
| 0-shot | 49.24 | 24.91 | 57.06 | 34.01 | 31.8 | 63.33 | 50.59 | 23.23 | 66.9 | 44.56 |
| 2-shot | 53.37 | 25.34 | 48.78 | 34 | 29.2 | 62.79 | 52.33 | 26.41 | 75.3 | 45.28 |
> Q3: At the end of Appendix A there is a brief comparison of TPA vs MLA in terms of the computation required and it’s claimed that d_rope + d_c can inflate the dot product cost by roughly 4.5x to 9x compared to MQA. The numbers used to arrive at this conclusion seem to contradict the conclusions in DeepSeek v3 where MLA only causes an increase of 2.25x vs MQA. Could you please elaborate on this part a little more.
A3: Thank you for your careful consideration of the comparison of computation between these attention mechanisms. Here's a clearer explanation:
- As described in Appendix A.8, the dot product cost is determined by the hyperparameters, including the number of heads ($h$) and the dimension of each head ($d_h$). Different sizes of these dimensions may result in differences in dot product computation cost. For MHA, MQA, and GQA, the cost to compute $Q_i(x_T)K_i^\top$ is $\mathcal{O}(hd_hT)$, and for MLA the cost is $\mathcal{O}(h(d_{\text{rope}}+d_c)T)$, where $T$ is the current sequence length.
- For example, in DeepSeek V3, the hidden dimension is 7168 with 128 heads. Compared with MQA, which has a dimension of 7168/128 = 56 for each head, MLA has $d_{\text{rope}} + d_c = 192$. Therefore, according to the analysis in Appendix A.8, it incurs a 192/56 ≈ 3.43x increase in computation cost but only a **2.25x increase in KV cache** (as described in the DeepSeek V2 technical report), and it has better performance than MQA. For smaller models, larger RoPE and compressed-representation dimensions are needed to preserve the superior performance of MLA, so a larger computation multiplier is expected.
- Moreover, calculating $QK^\top$ is only one part of the attention mechanism; we still need to calculate $\text{softmax}(QK^\top)$. Loading cached KV from memory takes time, so the speed is IO-aware. On modern GPUs, the decoding phase's IO speed is crucial, so the smaller the KV cache, the faster decoding is for the same computational FLOPs.
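The DeepSeek V3 arithmetic quoted above can be reproduced directly; this sketch only uses the numbers already stated in this reply:

```python
# Per-head dot-product dimension comparison from A3,
# using the DeepSeek V3 numbers quoted in the reply.
hidden_dim, n_heads = 7168, 128
mqa_head_dim = hidden_dim // n_heads  # 7168/128 = 56 per head for MQA
mla_dot_dim = 192                     # d_rope + d_c for MLA

# Dot-product cost multiplier of MLA relative to MQA
compute_ratio = mla_dot_dim / mqa_head_dim
print(f"MLA vs MQA dot-product cost: {compute_ratio:.2f}x")  # ≈ 3.43x
```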
Thank you again for bringing this to our attention. We will revise this section and provide a more detailed analysis of the computational costs associated with different attention mechanisms based on your suggestions. We look forward to your further evaluation and are happy to provide any additional explanations if needed.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, everything looks good. maintaining my rating, best wishes.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive and encouraging feedback. We are pleased to hear that our rebuttal has fully addressed your concerns. If you feel that our work deserves a higher score, we would sincerely appreciate your generous score increase. | Summary: This paper proposes Tensor Product Attention, which uses contextual tensor-decompositions. Based on TPA, the authors propose a new model architecture T6 for sequence modeling and adapt it with LLama and Gemma.
Claims And Evidence: N/A
Methods And Evaluation Criteria: The evaluation criteria make sense.
Theoretical Claims: N/A
Experimental Designs Or Analyses: relatively sound
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength:
1. A new architecture is proposed, which is valuable.
2. The writing is easy to follow and the paper is well-structured.
Weakness:
1. Despite reducing the number of parameters per token, TPA does not decrease GPU memory usage. The matrices Q, K, V still have the shape [T, h, d_h], identical to that of MHA. In this sense, TPA can be understood as a parameterization technique that re-parameterizes the matrices Q, K, V while following the same attention computation pipeline as MHA. It is puzzling why TPA, with fewer parameters to form Q, K, V and the same attention computation method, still achieves better performance than MHA.
2. There is a lack of fundamental theories to explain why TPA is superior to MHA and MQA. Does this stem from TPA's stronger expressive power? If so, please provide the necessary theoretical analysis. Otherwise, the experiments presented in the paper are hardly convincing enough to demonstrate that TPA is better than other attention mechanisms.
3. The attempt to unify MHA and other attention mechanisms in Section 3.4 seems rather forced. The fact that TPA can be expressed in the form of a tensor product does not necessarily mean that its expressive power is stronger under its commonly used parameter settings (e.g., R<<h) and after adding trainable parameters to other constant vectors (such as e in Eq. 3.10, or the vector of all ones in MQA).
4. Experimentally, it is unclear when to use TPA-KVonly and when to use TPA. Why does TPA-KVonly perform better on medium-sized models, while TPA is superior on large models? The authors have not provided a clear explanation for this phenomenon. Moreover, compared to other attention mechanisms like MHA, the advantages of TPA are quite limited.
Other Comments Or Suggestions: See above.
Questions For Authors: See weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and constructive feedback on our submission. We appreciate your recognition of the novelty and clarity, and your detailed questions help us refine our manuscript.
> Q1: Despite reducing the number of parameters per token, TPA does not decrease the GPU memory usage. The matrices Q, K, V still have the shape [T, h, d_h], ...
A1: We appreciate your question about memory usage and performance.
- **Memory Savings:** TPA's primary memory gain is the inference KV cache. MHA caches the full $K_t, V_t$ ($2Thd_h$ memory). TPA caches only the low-rank factors ($A_K(x_t), \tilde{B}_K(x_t)$), reducing per-token memory to $(R_K+R_V)(h+d_h)$. With typical low ranks ($R_K=R_V=2$) and $d_h=12$, this yields substantial savings (>10x), enabling longer sequences. Furthermore, Appendix A details algorithms computing TPA attention without materializing the full Q, K, V tensors. By working directly with factors, these methods can reduce computational cost (FLOPs) and peak memory usage during the forward pass, complementing the KV cache savings.
- **Performance Source:** TPA's improved performance stems from its Contextual Factorization. Unlike MHA's fixed projections, TPA dynamically constructs Q, K, V factors based on the token's hidden state $x_t$. This contextual adaptation provides an inductive bias, enhancing representational capacity. This is validated by TPA's consistently lower validation losses and perplexities (Figures 2, 3, 4). Appendix A further explores potential computational savings via factorized computation.
We will revise the paper to better distinguish computation memory (materialized vs. non-materialized) from inference KV cache benefits and clarify how contextual factorization drives performance.
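As a quick sanity check, the per-token KV cache formulas above can be compared in a few lines; the head count `h` and head dimension `d_h` below are hypothetical illustrative values (the paper's actual configurations may differ), with $R_K = R_V = 2$ as in the reply:

```python
# Per-token KV cache entries (floats), following the formulas in A1.
def mha_kv_per_token(h, d_h):
    # MHA caches full K_t and V_t: h heads x d_h dims, for K and for V
    return 2 * h * d_h

def tpa_kv_per_token(h, d_h, R_K=2, R_V=2):
    # TPA caches only the head-side (size h) and dim-side (size d_h)
    # factors for each of the R_K + R_V rank-1 components
    return (R_K + R_V) * (h + d_h)

h, d_h = 32, 64  # hypothetical head count / head dimension
ratio = mha_kv_per_token(h, d_h) / tpa_kv_per_token(h, d_h)
print(f"MHA/TPA KV-cache ratio: {ratio:.1f}x")  # 4096 / 384 ≈ 10.7x
```

Under these illustrative sizes the ratio already exceeds 10x, consistent with the savings claimed in the reply; the exact factor depends on the chosen ranks and head geometry.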
> Q2: There is a lack of fundamental theories to explain why TPA is superior to MHA and MQA. Does this stem from TPA's stronger expressive power?...
A2: We appreciate your concern regarding theoretical backing for TPA's superiority.
- Our core theoretical argument rests on Contextual Factorization. Section 3.4 demonstrates that MHA, MQA, and GQA are special cases of TPA where factors are restricted to be non-contextual (fixed). TPA generalizes these by allowing factors to depend dynamically on the input $x_t$. This context-dependency is proposed as the source of enhanced expressive power.
- Furthermore, TPA's seamless integration with RoPE (Theorem 1) ensures effective use of relative positional information, a crucial aspect where other efficient attention mechanisms sometimes struggle.
- While formal proofs of superior expressive power are future work, the theoretical framing via contextuality, combined with strong empirical validation across multiple scales and tasks (Section 4), provides support for TPA's design.
We will revise to expand on these theoretical foundations.
> Q3: The attempt to unify MHA and other attention mechanisms in Section 3.4 seems rather forced. The fact that TPA can be expressed in the form of a tensor product...
A3: We appreciate your feedback that the unification felt "forced" and questioned its implication for expressive power at low ranks ($R\ll h$).
- The unification aims to show TPA's flexibility as a framework encompassing MHA/MQA/GQA as specific instances with non-contextual factors and particular ranks. TPA's innovation lies precisely in making these factors trainable and contextual.
- We acknowledge the expressiveness trade-off with rank R. However, even with $R\ll h$, our experiments consistently demonstrate that the *contextual* nature of TPA's factors provides performance advantages over the fixed, non-contextual factors of MHA/MQA/GQA, alongside significant memory savings.
We will revise Section 3.4 to better clarify that contextuality is the key differentiator and link it more explicitly to the empirical results.
> Q4: Experimentally, it is unclear when to use TPA-KVonly and when to use TPA. Why does TPA-KVonly perform better on medium-sized models, while TPA is superior on large models?
A4: Thank you for feedback on variant choice and performance dynamics.
- **Variant Performance:** TPA factorizes Q/K/V; TPA-KVonly factorizes only K/V. Our results show TPA slightly outperforms TPA-KVonly on medium models (353M), while TPA-KVonly slightly leads on large/XL models (773M, 1.5B). (Note: This corrects the potential misreading in the review). The reasons need further study, possibly related to optimization/capacity trade-offs at scale.
- **Significance:** While gains over MHA may seem modest, they are consistent and meaningful for LLMs. Crucially, TPA offers a **dual advantage**: competitive performance **PLUS** substantial (>10x potential) **inference KV cache reduction**. This memory efficiency enables much longer sequences, addressing a critical scalability bottleneck. This practical benefit underscores TPA's value.
We will enhance the manuscript to clarify these points. | Summary: The paper describes Tensor Product Attention (TPA), a type of attention mechanism, where queries, keys, and values are represented in low-rank factorized format. The authors claim the proposed method yields to the cache memory reduction during inference while preserving model quality. A neural network architecture T6 (built using TPA, up to 1.5B parameters) is compared with well known forms of attention such as Multi-Head Attention (MHA), Multi-Query Attention (MQA), Grouped-Query Attention (GQA), and Multi-Head Latent Attention (MLA). The authors also show compatibility with Rotary Position Embeddings (RoPE).
## Update after rebuttal
The comparison with other tensor approaches is still missing, though it is a good point that the authors have an optimized implementation and code for the higher-order experiments. The general idea of the paper is nice, and the empirical evidence provided by the authors in the rebuttal looks convincing, so I increase my score.
Please include in the main part of the final version real memory and latency measurements, and at least a literature overview of other tensor-based methods.
Claims And Evidence: While the authors claim significant memory savings several times in the paper (for example, 10x in Section 3.3) due to TPA factorized KV caching, as well as time cost reductions (e.g., 4.5x to 9x in Appendix A.8), there is no clear evidence:
- Actual measurements of inference-time memory consumption or latency are not provided—only theoretical estimations.
- The training/validation loss and perplexity results do not include variance, so it is hard to tell whether the gains over the baselines are marginal.
- Memory and time savings require quite strong constraints on the decomposition ranks: they must be very small to provide benefits relative to other attention mechanisms (Section 3.3, Appendix A). It would be nice to provide some understanding of how the rank values affect quality. Also, very small ranks raise a question about efficient implementation on modern GPUs.
Methods And Evaluation Criteria: On the evaluated benchmarks (Fig. 2, Fig. 3), known attention mechanisms still perform better on some tasks; it would be nice to investigate the memory/time/quality trade-off of the method in more detail.
Theoretical Claims: The paper presents a justification for RoPE compatibility with TPA, which looks well-motivated. Also, comparative mathematical formulations for different attention mechanisms are provided
Experimental Designs Or Analyses: Please, see the comments in Claims and Methods sections regarding the experiments
Supplementary Material: No, just appendix
Relation To Broader Scientific Literature: There are many papers incorporating tensorized layers inside the Transformer architecture (https://arxiv.org/pdf/2302.09019). It would be nice to see more insights on how the CPD factorization described in the paper is compatible with other decompositions, for example https://aclanthology.org/2022.emnlp-main.475.pdf.
Essential References Not Discussed: Please, see the Scientific Literature section
Other Strengths And Weaknesses: The method introduction is quite easy to follow, though the current version would benefit from more solid evidence of the theoretical claims.
Other Comments Or Suggestions: The approach of incorporating higher-order tensor factorizations inside neural networks looks promising from the point of view of capturing correlations (https://arxiv.org/pdf/2310.04064). As further research and paper improvement, it would be interesting to see more discussion of and applications for the higher-order TPA described in Appendix B.
Questions For Authors: 1) I'm curious what the real memory and time savings achieved by the proposed TPA factorized KV caching method are, and how they correlate with the reported theoretical estimations.
2) In Table 2 of section Appendix G you mention that the training has been performed on 8 GPUs. Which type of parallelism have you used?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | null | null | null | null | null | null | |
DiMa: Understanding the Hardness of Online Matching Problems via Diffusion Models | Accept (poster) | Summary: The authors study the hardness of the online bipartite matching (OBM) problem using denoising diffusion probabilistic models (DDPMs). The DDPMs are trained using policy gradient to generate hard instances for OBM. For classic OBM problem this represents the hardest input showing the validity of this approach. Finally, this method is used to improve the hardness upper bounds for some variants of OBMs.
In particular, with random arrivals it improves the bound from 0.727 to 0.723, and with stochastic arrivals it improves the bound from 0.597 to 0.594.
Claims And Evidence: DDPM with specific fine tuning can help with generating hard instances in variants of the OBM problem. For OBM with random arrivals it improves the bound from 0.727 to 0.723, and with stochastic arrivals it improves the bound from 0.597 to 0.594.
This is proved by training DDPMs, and then proving the hardness theoretically on specific instances.
Methods And Evaluation Criteria: The method - training DDPM to find hard instances, and showing hardness of these instances - seems reasonable.
Theoretical Claims: The theoretical claims make sense at a high level. I have not checked the details of the proof.
Experimental Designs Or Analyses: The experiment design seems reasonable given the objective of finding hard instances.
Supplementary Material: I have not checked in detail.
Relation To Broader Scientific Literature: The application of AI to design hard instances to understand complexity is of interest to theoretical computer science community. However, I am not qualified to comment on it's importance.
Essential References Not Discussed: I am not familiar with the literature.
Other Strengths And Weaknesses: Strengths: The authors develop a new AI based techniques to produce hard instances. This seems to improve the state-of-the-art for two variants of the OBM problems.
Weakness:
- The resulting hard instances look very similar to the existing ones. Thus, although the methodology seems novel, the results are probably underwhelming. Specifically, given the similarity, simply tuning a few parameters could have given us the exact result.
- It seems we may not be able to discover novel settings by only mimicking the existing hard instances. I feel AI can be helpful only if we can somehow combine our intuitions (e.g. fine-tune with known hard instance) with novel exploration strategies. Without exploration we are bound to converge to known results.
- I could not find a proper mechanism to extrapolate the patterns found from finite sized instances to general instances (with n vertices, for any n). Why not train DDPMs to generate rules that can be extrapolated?
Edit after rebuttal: I read the clarifications provided by the authors. I appreciate the direction of the work even though I still maintain some of the reservations. I will keep my score.
Other Comments Or Suggestions: N/A
Questions For Authors: - Why the produced hard instances are close to the existing ones? Were they used for fine tuning? It seems exploration is lacking here.
- Can we prove the optimality of the existing algorithms on novel instances, even if they are found? If not then it feels there is a gap in the current approach.
Ethical Review Concerns: Theoretical paper. Hard to foresee impact.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate your constructive review and support of our work in applying AI techniques to theoretical computer science. This encourages us to study further in this direction. We hope the following responses will address your concerns and look forward to your ongoing support:
W1: While apparently similar, we believe the new hard instances fundamentally differ from existing ones and are non-trivial to identify. For example, in the OMRA problem, as demonstrated in Figure 5(a) and Figure 6, the hard instances we find have an additional dense subgraph in the bottom-left corner, which exhibits distinct characteristics compared to known distributions. Theorem 5.1 and the Appendix provide rigorous theoretical proofs confirming that they yield a lower upper bound. Conventional hard instance construction relies heavily on expert-designed structures (e.g., thick-z to upper-triangular configurations in OBM problems), leaving a large search space of $2^{n/2}$ intermediate graphs. Our methodology overcomes this limitation through the combination of diffusion models and RL. By directly optimizing objective functions (competitive ratio) through reward-guided exploration, we improve the efficiency of exploration while enabling the discovery of harder instances.
W2: We acknowledge that exploration is necessary for discovery. Our DiMa actually employs an innovative exploration strategy: the exploration occurs during the fine-tuning process using RL. DiMa consists of two phases. The first phase pretrains a DDPM, aiming to learn some known hard distributions. More importantly, DDPMs are known to be effective in generating **diverse** samples, which serves as exploration. Subsequently, the RL fine-tuning process optimizes the sample generation by distilling those harder instances.
W3: Thank you for proposing the idea related to rule generation. This is a very interesting, yet still very challenging, topic that we plan to leave as future work. In theoretical computer science, generalizing from a fixed graph size, say $n$, to arbitrary graph sizes is typically crafted by human expertise. However, very large instances are mainly constructed by repeating the small hard-instance structures. Therefore, to facilitate rule generation, the fundamental step is to search for and identify these hard structures (as we did in this paper), which were previously hand-crafted based on expert insights. Our contribution is to propose a novel AI-assisted framework to achieve that goal. Nevertheless, we acknowledge that this is still a preliminary attempt in the direction of AI for TCS; we hope to figure out how to generate rules directly in the future.
Q1: As mentioned in the response for W1, the hard instances we constructed are not close to the existing ones. Additionally, these known hard instances are not used for fine-tuning. In fact, the known hard instances are used in the pretraining process of DiMa to train a DDPM, allowing it to learn the known hard distributions and generate diverse instances.
Q2: Yes, we prove the optimality of the existing algorithms on novel instances, as shown in Appendix F. More details are as follows:
- For OMSR, we prove in Appendix F.2 that Balance (the state-of-the-art algorithm) is optimal on the instances generated by DiMa.
- For OMRA, while no algorithm is proven universally optimal, our instances theoretically establish that Ranking’s competitive ratio degrades to 0.723—a new upper bound which is proved in Appendix F.1.
This ensures that the generated instances are not merely harder empirically but also theoretically valid.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thoughtful comments. I agree that this direction of AI assisted TCS is exciting. However, I still have the following reservations
- relying on the inherent diversity of DDPM may not be effective, and we may end up at the local neighborhood of the hard instances used to pre-train the model.
- the requirement to show optimality of existing algorithms on novel instances keeps a reasonable burden out of AI's reach (can we combine reasoning models? requires some validation).
I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Great thanks for your kind acknowledgment and response. We hope the following response can address your concerns:
Q1: While local optimality arises in nearly all AI-driven applications, our framework makes every effort to avoid such situations, such as training an RL-guided DDPM for generation or introducing appropriate randomness in the implementation. We clarify that our DiMa hardly relies on the initialized instances for DDPM pre-training, meaning that DiMa still works even if it starts with some random instances. See the rebuttal to Reviewer ibv8 in the 'Generalizability' paragraph for details and additional experiments. Further, in our first application, the classic online bipartite matching model, DiMa succeeds in converging to the exact global optimum, i.e., the upper-triangular instances, indicating the great potential of DiMa's capability in finding entirely novel instances. For the two open problems, we cannot verify the optimality of our constructed instances, essentially because only when we know the exact optimal algorithms, i.e., when the algorithmic lower bound meets the hardness upper bound, could we say that some hard instances are the worst. However, this is another super challenging task in online matching theory, of independent interest. Nevertheless, even if our instances still fall into a local neighborhood, knowing something worse is always a step toward knowing the worst.
Q2: We actually see the potential of reasoning models in our task, though it remains somewhat of a mystery in current research. To the best of our knowledge, in very recent advances, reasoning models may succeed in some traditional mathematical proofs with explicit sequential logic, after having seen most of the standard proofs. However, in TCS, most tasks can be much more complex. While proving a series of lemmas is significant, deciding what to prove can be much more tricky and challenging. It relies on, for example, how to model the problem, how to write linear or non-linear programs, or other diverse mathematical tricks. In our motivation of hardness understanding, our method separates hard-instance construction as an independent subtask, which we surprisingly find and validate that AI techniques can help with. However, in the broader scope of understanding online optimization, to the best of our knowledge, there is little evidence of how to benefit from AI. We thank the reviewer for the insight in mentioning reasoning models, and we plan to leave the attempt to apply reasoning models to online algorithms as one of the most valuable future directions for us. However, we tend to believe this is an independent story from the contributions we made in this paper.
We thank you again for your great efforts in reviewing our paper and really appreciate your valuable insights and discussions. | Summary: The paper presents a method based on a diffusion model trained using reinforcement learning. This model is then used to generate difficult instances for specific algorithms in online bipartite matching problems. The method successfully generates hard examples in two variants of the online bipartite matching problem, leading to an improved upper bound on the competitive ratio for these problems.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: The proofs are in the appendix, so I did not check the details.
Experimental Designs Or Analyses: Yes, I checked all the details in the main text.
Supplementary Material: None
Relation To Broader Scientific Literature: This paper improves the state-of-the-art by improving the upper bounds on the competitive ratio for two well-known variants of the online bipartite matching problem. Moreover, it is a valuable contribution to the development of applied methods that can improve theoretical analysis.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: This is a strong paper. It is impressive that generative models can be used to advance state-of-the-art theoretical results. Since the paper delivers on its promise, I did not find any obvious weaknesses.
Other Comments Or Suggestions: The authors devote a significant portion of the paper to explaining elementary details on diffusion models and RL fine-tuning. I believe this discussion could be condensed in the main text, with the details deferred to the Appendix. Instead, I would prefer to see a more in-depth discussion of the hard instances identified by their model in the main text.
Questions For Authors: For the instances that improved the state of the art, how many samples were generated before discovering the hard instance? Was it among the points in the $100$ trajectories generated during fine-tuning?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive feedback and acknowledgment that "it is a valuable contribution to the development of applied methods that can improve theoretical analysis." We hope the following responses address your concerns and look forward to your ongoing support:
Comments or Suggestions: Before submitting our manuscript, we indeed faced a dilemma regarding the arrangement of content due to space constraints—whether to focus on the presentation of the AI-based framework or to delve into the theoretical analysis of the newly constructed hard instances. Ultimately, we chose the former based on the following considerations: (1) Given that the submission targets an international machine learning conference, we strive to enable researchers in the field to easily grasp the essence of our approach and appreciate the connection between our AI-based method and the theoretical domain; (2) Beyond the theoretical advancement, one of the major contributions of our work is the novel shortcut policy gradient (SPG) optimization method. We aim to present the essential details on diffusion models and RL fine-tuning in the preliminaries section to ensure that readers fully comprehend how SPG integrates into the framework. We greatly appreciate your encouragement to reconsider the content arrangement of our manuscript. In response, we have revised the manuscript by condensing the background explanations and moving them to the appendix while increasing the analysis of the newly constructed hard instances in the main text.
Q1: We claim that the hard instances were indeed generated during fine-tuning among the points in the 100 trajectories. In our framework, DiMa typically converges within 100 epochs and successfully identifies harder instances within the 10000 samples generated (100 trajectories × 100 epochs). During RL fine-tuning, we sample 100 trajectories per epoch to update the policy. This design stems from the inherent requirement of RL training, where gradient updates rely on accurately estimating the expected reward from a sufficiently large set of trajectories. Using fewer samples would result in a high variance in gradient estimation, leading to unstable policy updates. Through empirical validation, we found that sampling approximately 100 trajectories per epoch strikes a balance between stable policy convergence and computational efficiency. As the training progresses, the generated instances gradually converge toward harder candidates, and the final hard instances naturally emerge from the sampled trajectories in later epochs.
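The variance argument for sampling many trajectories per epoch can be illustrated with a toy score-function (REINFORCE) estimator. This is a hypothetical sketch, not the paper's code: the Gaussian policy and quadratic reward are stand-ins chosen only to show that the gradient-estimate variance shrinks as the number of trajectories grows.

```python
import numpy as np

def grad_estimate(theta, n_traj, rng):
    """REINFORCE estimate of d/dtheta E[r(a)] for actions a ~ N(theta, 1).

    Score function: d/dtheta log N(a; theta, 1) = (a - theta).
    """
    actions = theta + rng.standard_normal(n_traj)
    rewards = -(actions - 2.0) ** 2  # toy reward, peaked at a = 2
    return float(np.mean((actions - theta) * rewards))

rng = np.random.default_rng(0)
var_small = np.var([grad_estimate(0.0, 5, rng) for _ in range(200)])
var_large = np.var([grad_estimate(0.0, 100, rng) for _ in range(200)])
# the variance of the gradient estimate shrinks roughly like 1/n_traj
```

With 100 trajectories the estimate is markedly more stable than with 5, mirroring the trade-off between stable policy convergence and computational cost described above.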
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I still think that there are too many elementary details in the paper's main text and not enough details of the actually interesting part. However, it is ultimately up to the authors to decide the best way to present their work. In light of their response, I would like to keep my original review and score.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your kind review and valuable suggestion. We are continuing to polish our paper for better presentation. | Summary: The paper introduces a novel framework called DiMa, which enhances the theoretical understanding of Online Bipartite Matching (OBM) problems using diffusion models. DiMa models the generation of hard instances as a denoising process and optimizes them using a new reinforcement learning algorithm called Shortcut Policy Gradient (SPG). The framework is examined on the classic OBM problem, where it successfully reproduces the known hardest input instance. DiMa is also applied to two open-ended OBM variants, improving their theoretical state-of-the-art upper bounds by identifying worst cases with even lower rewards.
Claims And Evidence: The paper provides promising evidence for the effectiveness of DiMa in improving the understanding of OBM problems. The results on reproducing known hard instances and improving bounds on OBM variants are compelling. However, further clarification and stronger evidence in areas such as hyperparameter sensitivity, generalizability, complexity, and a deeper analysis of the generated hard instances would make the claims even more convincing.
* The paper provides evidence that DiMa can reproduce the known hardest instance for the classic OBM problem, even without seeing such instances in the training set. For instance, it can discover the triangular graph during the RL tuning stage, even if it is only trained with the thick-z graph. This is a very important property, as it demonstrates the framework's ability to learn and generate known hard instances.
* The paper presents results showing improved upper bounds for two OBM variants (online matching with random arrivals and online matching with stochastic rewards). Compared with the ICML 2024 paper (Zhang et al.), the instances found by DiMa outperform this baseline on several different graph sizes. The improved bounds are supported by specific instances generated by DiMa, and the authors provide proof sketches in the appendix.
***Weakness***
* While the paper highlights the significance of selecting distribution q and tuning hyperparameters, the methodology for determining an 'appropriate' q and the sensitivity analysis of hyperparameters require further elaboration. The authors' use of thick-z, though empirically successful, necessitates expertise and may limit generalizability. This reliance on a specific distribution also potentially undermines the paper's goal of reducing the expertise required compared to the ICML baseline. A more detailed examination of the impact of these choices on the results and the method's robustness would be valuable.
* The computational complexity of DiMa, especially concerning the training of DDPMs and the fine-tuning process with RL, could be discussed in more detail. While the authors address some computational concerns with SPG, a thorough analysis of the method's scalability to larger problem instances would be valuable.
* While the paper claims to generate "novel" hard instances, it would be interesting to see a more in-depth comparison of the generated instances with existing ones. A more rigorous analysis of the differences and the specific properties that make the generated instances harder would further support this claim.
Methods And Evaluation Criteria: ***Proposed methods***
* Using Denoising Diffusion Probabilistic Models (DDPMs) to generate hard instances for Online Bipartite Matching (OBM) is a novel approach. DDPMs are known for their ability to generate high-quality samples while capturing the underlying distribution of the training data. In this context, it's reasonable to use them to generate challenging OBM instances by learning from a distribution of known hard cases.
* Fine-tuning the DDPM using reinforcement learning (RL) is also a sensible choice. RL allows the model to optimize the instance generation process based on a reward signal that reflects the hardness of the generated instances. This makes it possible to iteratively refine the instance distribution towards generating harder and harder cases.
* The proposed Shortcut Policy Gradient (SPG) algorithm addresses a key challenge in applying RL to this problem. The experimental results suggest that SPG outperforms other methods like DDPO, though such advantage appears to be brought mainly by DDIM rather than by a novel RL algorithm.
***Metrics***
* The competitive ratio (CR) is a standard metric for evaluating the performance of online algorithms, as well as OBM problems. It measures the ratio between the matching size found by an online algorithm and the optimal offline matching.
* The goal of the paper is to improve the theoretical upper bounds on the competitive ratio for OBM problems. Ideally, to evaluate the hardness of an OBM problem, the upper bound should be defined in terms of the problem rather than of specific algorithms. In some empirical studies (Sec 5.1 & 5.2), however, these CRs appear to be evaluated with respect to particular algorithms, which I think is acceptable given existing works.
In summary, the proposed methods and evaluation criteria are well-aligned with the goal of understanding and improving the hardness results for OBM problems.
Theoretical Claims: Upon reviewing Theorems 5.1 and 5.2, which assert a superior competitive ratio bound compared to prior research, the proof's logical progression appears sound. I presume the numerical results, derived from their equations, are accurate.
Experimental Designs Or Analyses: The experimental designs and analyses presented in the paper are generally sound and provide evidence for the effectiveness of DiMa. In their ablation studies, they also analyze the impact of parameter $\gamma$ or distribution q on the overall performance. The comparison of the proposed SPG algorithm with a baseline method (DDPO) provides evidence for its effectiveness by skipping some steps.
However, there are areas where further details, analysis, and more rigorous evaluation would enhance the validity and strengthen the conclusions. Addressing the points mentioned above, such as providing a more thorough hyperparameter analysis, justifying the choice of q (especially in more difficult scenarios), and including generalizability analysis, would contribute to a more convincing and robust evaluation.
Supplementary Material: The supplementary code appears capable of reproducing the paper's empirical results. However, due to the requirement for parameter tuning and model training, I have not independently verified the results through execution.
Relation To Broader Scientific Literature: * The paper builds upon the emerging field of AI-enhanced combinatorial optimization, especially the online bipartite matching problems. It specifically acknowledges the work of Zhang et al. (2024a) as a recent attempt in this area, which uses reinforcement learning to improve the hardness result of an OBM model. DiMa contributes to this area by introducing a novel framework based on diffusion models (DDIM and DDPM).
* The use of denoising diffusion probabilistic models (DDPMs) connects the paper to the broader literature on generative models. DDPMs have been successful in various applications, particularly in image generation, and DiMa leverages their generative capabilities to create hard instances for OBM problems. The paper also draws inspiration from denoising diffusion implicit models (DDIM), which are known for efficient sampling.
* The fine-tuning process in DiMa utilizes reinforcement learning (RL), which connects the work to the extensive literature on RL algorithms and their applications. The paper also proposes a novel RL algorithm, Shortcut Policy Gradient (SPG), which builds upon existing policy gradient methods like REINFORCE
Essential References Not Discussed: Given this paper's focus on online bipartite matching (OBM) using machine learning, particularly within the 'ML for OBM' section of related works, including the following works on RL-based OBM algorithms would be beneficial.
* Alomrani, Mohammad Ali, Reza Moravej, and Elias B. Khalil. "Deep policies for online bipartite matching: A reinforcement learning approach." arXiv preprint arXiv:2109.10380 (2021).
* Li, Pengfei, Jianyi Yang, and Shaolei Ren. "Learning for edge-weighted online bipartite matching with robustness guarantees." International Conference on Machine Learning. PMLR, 2023.
Other Strengths And Weaknesses: N.A
Other Comments Or Suggestions: N.A
Questions For Authors: N.A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your comprehensive and insightful reviews and for positively evaluating our paper and contributions as “promising, compelling, and important”. We hope the following clarifications will address your concerns and look forward to your reconsideration of our work. Our supplemental results are in supplemental figures:https://anonymous.4open.science/api/repo/rebuttal_figure/file/Reviewer%20ibv8.pdf?v=8c96977d.
Hyperparameter sensitivity: Besides Figure 11 in the Appendix, we have added a more detailed sensitivity analysis of $\gamma$ in the ablation study (i.e., $\gamma$ varying from 0 to 1 at 0.05 intervals). Our DiMa easily converges to the worst upper-triangular instances within 100 epochs when $\gamma \in [0.2, 0.35]$, while for small ($\gamma < 0.10$) or large ($\gamma > 0.50$) values, it fails to converge even after 500 epochs. See Figure 1 in supplemental figures.
Generalizability: We think there might be a misunderstanding about the DDPM initialization. We clarify that DiMa can converge to upper-triangular instances starting from diverse distributions (not restricted to the thick-z), even from random samples. Our main argument is that leveraging structural information from known hard instances, which are easily obtained in the literature, may facilitate the fine-tuning process to discover new harder instances. Such an idea also aligns with conventional hand-crafted constructions in OBM, where new harder instances are typically built on the existing ones with slight modifications. Nevertheless, we thank you for pointing out this potential ambiguity, and we have added detailed explanations in the experimental setup. We also provide the following evidence to further support this: (1) We visualize some of the initialization distributions that successfully converge to upper triangulars, including randomly sampled ones (Figure 2 in supplemental figures); (2) We conduct additional experiments starting with 50 random samples, nearly one third of which enable DiMa to find the worst instances within 100 epochs. Finally, we emphasize that unlike [Zhang et al.], which appears to heavily depend on intermediate observations for iterative adjustments during the training process to discover novel instances, our method largely reduces the reliance on expert insights because existing hard instances are easily obtained in the literature. Although DiMa can benefit from known hard instances, it is essentially end-to-end.
Complexity: Thank you for acknowledging the effectiveness of our SPG in reducing computational costs. We propose SPG as an independent contribution addressing a common challenge in the ML-for-OM literature. Traditional methods struggle with large-scale graphs. For example, in [Zhang et al.], their RL approach for OMSR seems limited to small graphs (fewer than 10 offline vertices). In contrast, DiMa efficiently works on remarkably larger instances (of size larger than 50) on a 24GB GPU within an hour. Detailed experimental costs of DiMa across various graph sizes are presented in the table below:
| Method | Graph size | Memory size | Running time |
| :---------------------- | :--------: | :---------: | :----------: |
| SPG | 12x12 | 0.62GB | 20s/epoch |
| SPG | 20x20 | 1.14GB | 25s/epoch |
| SPG | 40x40 | 5.05GB | 35s/epoch |
| SPG | 80x80 | 22.59GB | 45s/epoch |
| Traditional Computation | 20x20 | >24GB | -- |
We thank the reviewer for pointing this out and will add a separate discussion to highlight such strength of our DiMa.
Analysis of hard instances: We do identify properties distinct from those reported before. For example, in OMSR, our baseline [Zhang et al.] claims to observe two primary properties, named Consistency and Exclusivity. We actually discover a more fine-grained property critical for harder instances, which we call the significance of dense subgraphs. While the ICML baseline seems to implicitly suggest that including some dense structures while ensuring sparsity in other parts is helpful for hardness construction, we further observe the impact of the proportion and location of dense subgraphs in generating harder instances. Similar findings also arise in OMRA. The proofs of Theorems 5.1 and 5.2 are built on these findings distilled from the learned instances (in Appendix F). We appreciate the reviewer's suggestion and have added a more detailed discussion in the 'proof sketch'. Additionally, we have added an extra discussion at the end of Section 5 to clarify the differences between our property and the baseline's.
Related work: Thank you for providing valuable literature. We have added these two papers to the related work section ('ML for OBM' paragraph).
---
Rebuttal Comment 1.1:
Comment: I really appreciate the author's rebuttal and efforts in additional experiments. I think most of my concerns are addressed and I will raise my evaluation accordingly.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your thorough review and the acknowledgment of our rebuttal. | Summary: In this paper, they train a diffusion model to construct hard instances for Online Bipartite Matching problem.
Using the proposed method they find state-of-the-art upper bounds for the random arrivals and stochastic arrivals variants of Online Bipartite Matching problem.
For the training of the diffusion model they propose a reinforcement learning technique named shortcut policy gradient.
Claims And Evidence: The main claim of the paper is the construction of hard instances for two variants of Online Bipartite Matching.
Apparently, this claim is clear and has convincing evidence to support it.
What I am not sure about is the usefulness and generalizability of the proposed method.
It seems that it finds instances very close to already known hard instances.
Hence, it is not clear whether this can enable any progress in the theoretical understanding of the problems.
Also, it is not clear how this method compares to other intuitive methods that one might try in order to find hard instances.
For example, it seems possible that applying MCTS (or some other search technique) we might also find harder instances than the known hard instance. (see also the Questions section)
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense.
Theoretical Claims: N/A. The paper is mostly experimental.
Experimental Designs Or Analyses: I didn't find any issue in the experiments.
Supplementary Material: I didn't review the supplementary material.
Relation To Broader Scientific Literature: The main related work in the domain of AI-enchanced combinatorial optimization theory is that of Zhang et al. (2024a).
This paper provides a more structured approach for the problem of interest by improving various parts of the previous approach.
In the domain of online combinatorial optimization and specifically online bipartite matching this paper provides state-of-the-art upper bounds for two variants of the problem.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: I think some points need further clarification. See the Questions section.
Other Comments Or Suggestions: Some minor errors/typos:
- Move Algorithm 1 to the top of the page
- line 260: "at the neighbor" -> "at the neighborhood"
- line 174: "we without loss of generalization assume" needs rephrasing
Questions For Authors: - In equation (6), you state that $J_{RL}(\theta) \propto J_{CR}(\theta)$. However, $J_{RL}(\theta)$ only takes into account the reward of the algorithm and not the reward of the optimal solution. Hence, are these two quantities proportional?
- Did you try applying the proposed technique to other problems as well? Will applying your method to other problems require any modifications?
- Can you explain the formulation of the reverse denoising process as an MDP (Section 4.2)?
- Is there any particular reason you used the rounding in equation (9)? For example, you set $\hat{I}_{ij}$ probabilistically and not according to a predetermined threshold.
- Do you have any understanding about the hard instances your model found? Do you think they can be used to make progress in the theoretical study of the problem?
- What will happen if you train your model based on the new hard instance you constructed? Do you expect it to find an even harder instance? Does this approach make sense? Please explain.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your thorough review and positive review of our work as "clear and convincing". We hope the followings address your concerns and look forward to your kind reconsideration of our work:
Generalizability (also Q2): We believe our DiMa demonstrates strong generalizability. We evaluate DiMa on **three** different OBM problems, for two of which the **theoretical SOTAs** are improved (see Appendix F for formal proofs). In particular, the upper bound of OMRA had not been improved since 2011. Similar previous works (such as [Zhang et al.] or [Kong et al.]) either work on one specific problem or only reproduce (not improve) the known results. Further, our DiMa can be easily adapted to other OBM problems with slightly different implementations, including initializations and hyper-parameters. Finally, though the learned hard instances may look similar to the known ones, they are in fact not quite similar; moreover, conventional search methods like MCTS may behave like brute-force search, which is highly inefficient at finding worse instances. In contrast, our RL-guided DDPM framework generates low-CR instances through objective-driven rewards, enabling efficient targeted exploration toward harder instances. To further validate the generalizability of DiMa, we evaluate it on the famous AdWords problem, which will be included in the Appendix. See supplemental figures: https://anonymous.4open.science/api/repo/rebuttal_figure/file/Reviewer%20zCLi.pdf?v=d722f741 for details.
Q1: We thank the reviewer for pointing out the potential ambiguity in the reward calculation. The CR requires comparing algorithmic solutions with the offline optimum (OPT). In our paper, to simplify computation, we constrain all instances to have a perfect one-to-one matching in OPT, such that OPT always equals the number of offline vertices. Such treatment aligns with nearly all (theoretical or AI-driven) prior works. To mitigate this potential ambiguity, we have added detailed explanations of the CR to Section 3.1.
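As a hedged illustration of this convention (hypothetical code, not the authors' implementation): when every instance is constrained to admit a perfect matching, OPT equals the number of offline vertices n, so the empirical CR of, say, a deterministic greedy algorithm reduces to its matched count divided by n.

```python
import numpy as np

def greedy_cr(adj):
    """Empirical CR of deterministic greedy on a bipartite instance.

    adj[i, j] = 1 iff offline vertex i is adjacent to online vertex j.
    Assumes the instance has a perfect matching, so OPT = n (offline side).
    """
    n = adj.shape[0]
    free = np.ones(n, dtype=bool)  # unmatched offline vertices
    matched = 0
    for j in range(adj.shape[1]):  # online vertices arrive one by one
        neighbors = np.flatnonzero(adj[:, j] & free)
        if neighbors.size:         # match to the first free neighbor
            free[neighbors[0]] = False
            matched += 1
    return matched / n

print(greedy_cr(np.eye(3, dtype=int)))  # 1.0: greedy is optimal here
```

On an adversarial instance such as `[[1, 1], [1, 0]]`, greedy matches only one of two online vertices, giving CR = 0.5 even though a perfect matching exists.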
Q3: The MDP of the reverse denoising process is:
- State: Represented as a tuple $(\mathcal{I}_{T-t}, T-t)$ (the current noised instance and the timestep index). This fully captures the system state without historical information, satisfying the Markov property.
- Action: The role of the policy $\pi_\theta$ at each step $t$ is to denoise the current noised instance, and the resulting denoised instance $\mathcal{I}_{T-t-1}$ is the action we take.
- Reward: Rewards are focused on the quality of $\mathcal{I}_0$, with no rewards for intermediate steps.
- Transition: The next state $s_{t+1} = (\mathcal{I}_{T-t-1}, T-t-1)$ is uniquely determined by $P(s_{t+1}|s_t,a_t)$.
- Policy: The denoising network $p_\theta(\mathcal{I}_{T-t-1}|\mathcal{I}_{T-t})$ determines how to generate the next denoised instance, which is the specific implementation of the policy.
We have updated the above to Section 4.2.
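The MDP above can be sketched as a simple rollout loop. This is a toy illustration with hypothetical placeholders: `denoise_step` stands in for the learned denoiser $p_\theta$ and `hardness_reward` for the CR-based terminal reward.

```python
import numpy as np

def denoise_step(instance, t, rng):
    """Placeholder policy: one reverse-denoising step (stands in for p_theta)."""
    step = instance - 0.1 * (t / 10.0) * rng.standard_normal(instance.shape)
    return np.clip(step, 0.0, 1.0)

def hardness_reward(instance):
    """Placeholder terminal reward; in the paper this is driven by the CR."""
    return -float(instance.mean())

def rollout(n, T, seed=0):
    rng = np.random.default_rng(seed)
    state = rng.standard_normal((n, n))           # s_0 holds I_T: pure noise
    for t in range(T):
        action = denoise_step(state, T - t, rng)  # a_t is the denoised I_{T-t-1}
        state = action                            # deterministic transition
    return state, hardness_reward(state)          # reward only on the final I_0

final_instance, reward = rollout(n=4, T=10)
```

The key point the sketch captures is that intermediate steps earn no reward: the whole trajectory is credited only through the quality of the final denoised instance.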
Q4: We actually tried both probabilistic sampling and predetermined thresholding for rounding. While both methods can work, the thresholding method is much more stable and easier to implement. In contrast, probabilistic sampling results in higher variance in the reward distribution, leading to increased bias in policy gradient estimation and a reduction in the effective step size of policy updates.
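The two rounding schemes under discussion can be sketched as follows (a hypothetical illustration, not the paper's implementation): thresholding is deterministic and reproducible, whereas Bernoulli sampling injects extra randomness into the resulting binary instance and hence into the reward.

```python
import numpy as np

rng = np.random.default_rng(0)
probs = rng.uniform(size=(4, 4))  # continuous relaxed instance in [0, 1]

# deterministic thresholding: reproducible, no extra variance
hard_threshold = (probs >= 0.5).astype(int)

# probabilistic (Bernoulli) rounding: unbiased per entry, but noisy
hard_sampled = (rng.uniform(size=probs.shape) < probs).astype(int)
```

Running the sampled variant repeatedly yields different binary instances for the same `probs`, which is the source of the higher reward variance mentioned above.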
Q5: The hard instances found by DiMa correspond to **theoretical** upper bounds (with formal proofs in Appendix F) of both OMSR and OMRA, enhancing the theoretical understanding of these two open problems. Our proofs benefit from some structural properties observed and distilled from the learned instances. For example, in OMSR, we identify a fine-grained property named the significance of dense subgraphs. We observe that it is crucial to include some dense structures while ensuring sparsity in other parts. Further, we also see the impact of the proportion of dense subgraphs in obtaining harder instances. The proof of Theorem 5.2 is built on these findings distilled from learned instances.
Q6: Thank you for this interesting question. We actually once tried to continue training DiMa on the new hard instances we found. However, it failed to produce any harder ones. We expect that we have obtained the hardest instances, even though we cannot verify this now unless the optimal algorithms are found. Nevertheless, we acknowledge that this is a valuable question that can further support the effectiveness of our DiMa, and we have included it as a separate discussion.
[Zhang et al.] : Zhang Q, Shen A, Zhang B, et al. Online matching with stochastic rewards: provable better bound via adversarial reinforcement learning. ICML 2024.
[Kong et al.] : Kong W, Liaw C, Mehta A, et al. A new dog learns old tricks: RL finds classic optimization algorithms. ICLR 2019.
Quantifying Treatment Effects: Estimating Risk Ratios via Observational Studies | Accept (poster) | Summary: This paper develops novel estimators for the Risk Ratio (RR) in observational studies to accommodate confounding in non-randomized settings. It introduces estimators based on inverse propensity weighting, the G-formula, and doubly robust techniques, and establishes their asymptotic normality. Simulation studies and a real-world application demonstrate that their estimators yield valid and efficient treatment effect estimates.
Claims And Evidence: The claims made in the paper are well supported by both theoretical derivations and simulation studies.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are standard and make sense for the problem at hand.
Theoretical Claims: I did not check the correctness of any theoretical claims.
Experimental Designs Or Analyses: N/A
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The paper’s key contributions are well presented and well connected to the broader literature on treatment effect estimation.
Essential References Not Discussed: All the essential references were discussed.
Other Strengths And Weaknesses: The writing of figure captions could be improved for greater clarity and context.
For example, the caption for Figure 1 does not explain what the different colors represent.
Other Comments Or Suggestions: N/A
Questions For Authors: - Equation (17) requires consistency of both nuisance function estimators, yet in classic doubly robust settings it is often sufficient for only one to be consistent. Could you explain why both conditions are imposed here—does the non-linear nature of the risk ratio functional necessitate that both the propensity score and outcome model be estimated consistently, or could a relaxation where only one of them is consistently estimated still yield valid inference?
- When the baseline risk is very low, the risk ratio may be unstable. Could you provide additional simulations to show the coverage of your estimator as the baseline risk approaches zero?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: - **Figure Caption Clarity:**
Thank you for pointing this out. We agree that the figure captions could be clearer. In particular, we will revise the caption to explicitly mention that the colors represent the "Sample Size".
- **Doubly Robust Conditions:**
You are absolutely right that in classical doubly robust settings, it is sufficient for only one of the nuisance estimators (either the outcome regression or the propensity score) to be consistent for the overall estimator to remain consistent. This is commonly referred to as *weak double robustness*.
As discussed in Wager's *Causal Inference Book* ([Section 3](https://web.stanford.edu/~swager/causal_inf_book.pdf)), it is helpful to distinguish between **weak** and **strong** double robustness:
- **Weak double robustness** refers to the property that the estimator is consistent if either the outcome model or the propensity score model is estimated consistently. However, weak double robustness does not, in general, guarantee asymptotic normality or valid confidence intervals under model misspecification.
- **Strong double robustness**, on the other hand, refers to the setting where asymptotic normality can be achieved if both nuisance functions are estimated consistently at sufficiently fast rates. Specifically for the classical RD setting, this holds when both $\hat \mu_{(t)}$ and $\hat{e}$ are estimated with root-mean squared error rates decaying faster than $n^{-\alpha_{\mu_t}}$ and $n^{-\alpha_e}$ respectively, and when $\alpha_{\mu_t} + \alpha_e \geq \frac{1}{2}$ for each $t \in \{0, 1\}$. Then AIPW is asymptotically normal.
In our paper, we establish *strong double robustness* for RR-AIPW. This result relies on both Equation (17) and Equation (14), which are strictly equivalent to the classical conditions for strong double robustness. It is worth noting that weak double robustness is also achieved for RR-AIPW but not for RR-OS since we always need $\hat{\mu}_{(0)}$ to be consistent.
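To make the construction concrete, here is a minimal sketch of an AIPW-style plug-in estimator of the RR (the ratio of the two AIPW mean estimates) on synthetic data with the true nuisance functions plugged in. This illustrates the general construction only, not the authors' exact RR-AIPW implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)

# Synthetic data-generating process (known, so the nuisances are exact).
e = 1.0 / (1.0 + np.exp(-x))             # propensity score P(A=1|X)
a = (rng.uniform(size=n) < e).astype(float)
mu1 = 2.0 + x                            # E[Y(1)|X]
mu0 = 1.0 + 0.5 * x                      # E[Y(0)|X], positive on average
y = a * mu1 + (1 - a) * mu0 + rng.normal(size=n)

# AIPW scores for E[Y(1)] and E[Y(0)].
psi1 = mu1 + a * (y - mu1) / e
psi0 = mu0 + (1 - a) * (y - mu0) / (1 - e)

# Plug-in ratio of the two AIPW means; here E[Y(1)] = 2 and E[Y(0)] = 1,
# so the true RR equals 2.
rr_aipw = float(psi1.mean() / psi0.mean())
```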
- **Low Baseline Risk:**
Thank you for raising this important point. When the baseline risk is extremely low, the risk ratio becomes inherently unstable, often resulting in wide confidence intervals. We have explored this empirically, and the results are shown in the table below.
As the baseline risk approaches zero, the estimated RR can become extremely large, while the absolute risk difference remains small: different causal measures lead to different interpretations, thus suggesting that several causal measures should be used simultaneously to properly understand the impact of a treatment.
This case also illustrates why it is important to report both an absolute measure (e.g., risk difference) and a relative one (e.g., risk ratio), as they provide complementary perspectives on treatment effects—particularly when baseline risks are extreme.
| **Baseline** | **RR** | **Estimator** | **Coverage (mean ± std)** | **Length (mean ± std)** |
|--------------|----------|---------------------|----------------------------|----------------------------|
| 2.5 | 1.77 | Linear RR-G | 0.878 ± 0.113 | 0.099 ± 0.002 |
| 2.5 | 1.77 | OLS RR-G | 0.948 ± 0.050 | 0.120 ± 0.003 |
| 2.5 | 1.77 | Linear RR-AIPW | 0.968 ± 0.047 | 0.185 ± 0.007 |
| 0.045 | 15.3 | Linear RR-G | 0.992 ± 0.027 | 3.728 ± 0.513 |
| 0.045 | 15.3 | OLS RR-G | 0.984 ± 0.037 | 3.037 ± 0.435 |
| 0.045 | 15.3 | Linear RR-AIPW | 1.000 ± 0.000 | 4.734 ± 0.735 |
| 0.0018 | 112.1 | Linear RR-G | 0.987 ± 0.022 | 884.427 ± 206.672 |
| 0.0018 | 112.1 | OLS RR-G | 0.978 ± 0.032 | 1026.384 ± 214.754 |
| 0.0018 | 112.1 | Linear RR-AIPW | 0.972 ± 0.036 | 2073.229 ± 509.709 |
**Table:** Coverage and length of confidence intervals for different RR estimators across varying baseline risks. | Summary: The authors propose several estimators for the average risk ratio (RR). They begin by analyzing the RR version of the Neyman estimator under standard causal inference assumptions, proving its asymptotic normality and deriving an expression for its variance. They then extend this analysis to an RR variant of the Horvitz-Thompson estimator.
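The blow-up at low baseline risk can also be seen in a tiny Monte-Carlo sketch (illustrative only: it uses the naive plug-in RR on two independent arms, not any of the paper's estimators):

```python
import numpy as np

def rr_spread(p0, p1, n=2000, reps=500, seed=0):
    """Monte-Carlo spread of the naive RR estimate mean(Y1)/mean(Y0)."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(reps):
        y0 = rng.uniform(size=n) < p0
        y1 = rng.uniform(size=n) < p1
        if y0.sum() > 0:                 # RR is undefined with no baseline events
            estimates.append(y1.mean() / y0.mean())
    return float(np.std(estimates))

# Same true RR = 2 in both settings, but a much lower baseline risk below.
spread_moderate = rr_spread(p0=0.25, p1=0.50)
spread_low      = rr_spread(p0=0.005, p1=0.010)
```

As the baseline risk shrinks, the spread of the RR estimate grows sharply even though the true RR is unchanged, mirroring the widening interval lengths in the table above.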
Next, the authors refine their results by assuming the true propensity scores follow a logistic model and fitting estimators using maximum likelihood estimation (MLE). They further establish the asymptotic normality and derive an explicit variance expression for an RR version of the G-formula—first using the true feature probabilities and then with an estimator where the conditional expectation is obtained via least squares.
The paper concludes with simulations and a real-world application using a dataset on traumatic brain injury.
Strengths
-The paper is well-executed, providing a thorough set of theoretical results and strong empirical validation, including a real-world case study.
Weaknesses
-The properties of RR estimators may already be well known, though I am not a causal inference expert.
Claims And Evidence: Refer to summary
Methods And Evaluation Criteria: Refer to summary
Theoretical Claims: Refer to summary
Experimental Designs Or Analyses: Refer to summary
Supplementary Material: I did not.
Relation To Broader Scientific Literature: In the introduction of this paper by Rose and Van Der Laan (2014) https://sci-hub.ru/https://doi.org/10.1093/aje/kwt318 , they discuss existing methods to estimate the risk ratio. Also, I think Kennedy has a paper where he proves properties of the risk ratio conditioned on the features.
Essential References Not Discussed: refer to Broader Scientific Literature
Other Strengths And Weaknesses: Refer to summary
Other Comments Or Suggestions: No extra comments
Questions For Authors: No questions for the authors
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: - **Clarifying the Novelty and Contribution:**
We thank you for acknowledging the thoroughness of both the theoretical development and empirical validation. We would like to take this opportunity to clarify the main contribution of the paper.
One of the main contributions of this paper is to provide a comprehensive theoretical and empirical analysis of RR estimators, including new ones like RR-OS and RR-AIPW, in the context of observational studies. While RR is a well-established estimand, its estimation introduces distinct challenges due to its non-linearity.
To the best of our knowledge, there has not been a unified and rigorous asymptotic analysis of IPW, G-formula, or AIPW estimators for the marginal RR in the literature. Our work aims to fill this gap by deriving explicit variance expressions, exploring efficiency bounds, and validating the estimators both theoretically and empirically in both continuous and binary outcome settings.
- **On the Work by Rose and van der Laan (2014):**
The cited work by Rose and van der Laan focuses on Targeted Maximum Likelihood Estimation (TMLE) for the risk difference in case-control studies. While this is an important contribution, the setting considered there differs from the randomized and observational study framework analyzed in our work and focuses on binary outcomes. Furthermore, while TMLE is a flexible approach, our paper focuses on IPW, AIPW, and G-formula estimators, whose asymptotic properties have not, to the best of our knowledge, been rigorously established in the context of marginal RR estimation using observational data — also for continuous outcomes. We thank the reviewer for pointing out this reference; we will mention it in the conclusion as a potential research direction for finite sample studies.
- **On the Mention of Kennedy’s Work:**
We are not entirely sure what specific paper by Kennedy you refer to. We have read several papers by Kennedy, three of which are cited in our work, but we did not encounter specific studies by Kennedy (or anyone else) analyzing different estimators of the marginal risk ratio. That said, it is worth noting that the conditional risk ratio is not directly collapsible. This means that estimating the conditional RR generally does not provide a valid estimator for the marginal RR. In contrast, our work specifically targets estimators for the marginal RR and provides a detailed asymptotic analysis under standard causal inference assumptions.
We hope that the above points help to dispel your doubts about the novelty of our work. | Summary: The authors discussed theory of risk ratio estimation in observational data. Several RR estimators are proposed with theoretical investigation, including asymptotic normality and confidence intervals. Two doubly robust estimators are proposed and the authors recommended the use of one of them.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, I have checked all the proof.
Experimental Designs Or Analyses: Yes - the experimental design makes sense to me.
Supplementary Material: No supp material found.
Relation To Broader Scientific Literature: Adding contribution to estimation of risk ratio in observation study.
Essential References Not Discussed: Not aware of any.
Other Strengths And Weaknesses: Strength: the work is very clearly written, and is a good addition to the causal inference literature for estimation of RR. I am very familiar with the observational study literature and the results for the different estimators are pretty natural. Overall I am positive about the paper and I will skip technical questions here.
Weakness:
1. Though the assumptions require that the conditional expectation of $Y(0)$ given $X$ is greater than zero, this is never guaranteed in finite samples using, say, AIPW. How should one control for this?
2. In terms of simulations, I am interested in seeing the performance of the RR-OS and RR-AIPW estimators under some model misspecifications.
3. The paper does not discuss the implementation of its several variance estimators. For example, how does one implement $V_{RR,OS}$?
Other Comments Or Suggestions: 1. Assumption 2.2, part (iii): independence of the treatment assignments is already guaranteed by the i.i.d. assumption.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive review. We appreciate your positive comments on the clarity of the writing and the contribution of our work to the causal inference literature. Please find our responses to your comments below:
1. **Regarding the assumption $E[Y(0) \mid X] > 0$:**
Our initial motivation for introducing it was to establish finite-sample bias and variance results, which are currently included only in the appendix. However, for the main theoretical results, we only need to assume $E[Y(0)] > 0$ to ensure that the risk ratio is well-defined.
We agree that this assumption may not always hold, but in practice, many biological quantities are positive (e.g., blood concentrations). Besides, if the variable $Y(0)$ is binary, $E[Y(0)] > 0$ holds as soon as one observation satisfies $Y_i(0) = 1$, which seems to be a reasonable assumption.
In the revised manuscript, we will move the stronger assumption $E[Y(0) \mid X] > 0$ to the appendix (where it is needed) and replace it in the main text with the weaker and more general condition $E[Y(0)] > 0$. We will also add the above discussion.
2. **On model misspecification in simulations:**
We completely agree that comparing the estimators under various forms of model misspecification is important. Due to space constraints, we were unable to include this analysis in the main body of the paper, but we incorporated two relevant scenarios in Appendix 8.2:
- **Wager C model (Appendix 8.2.1):**
Both response functions are nonlinear, while the propensity score is logistic. In this setting, Logistic RR-IPW, Linear RR-AIPW, and RR-OS (using linear and logistic regressions) remain asymptotically unbiased, while Forest RR-OS and Forest RR-AIPW are biased for small sample sizes due to the difficulty forests have in estimating logistic functions with few observations.
- **Wager A model (Appendix 8.2.2):**
Both response functions and the propensity score are nonlinear/non-logistic. All linear estimators are biased in this case, but estimators that use Random Forests to estimate the nuisance functions exhibit decreasing bias as the sample size increases.
If the paper is accepted, we will use the additional page to move this analysis into the main paper.
3. **On the implementation of variance estimators:**
Thank you for pointing this out. In fact, we designed variance estimation procedures to construct the confidence intervals reported in the experiments, but we recognize that the implementation details were not highlighted enough.
In the current version, the relevant formulas are in the appendix (in Section 4.2). We will elaborate on this and add a dedicated section in the Appendix. In addition, we will better highlight in the main text that we have estimated all variances, as they can all be expressed using estimable quantities, and that we used these estimations in all our simulations.
4. **Redundancy in Assumption 2.2:**
You are totally right. We will revise this assumption to remove the redundancy. Thank you for pointing this out!
---
Rebuttal Comment 1.1:
Comment: Thanks for the careful responses. The authors have addressed my concerns and I have no further comments/questions. | Summary: Authors focus on risk ratio (RR), a measure of treatment effectiveness complementary to the more common "risk difference". They first analyze the standard estimators for RR in an RCT and derive some new asymptotic normality/variance results, such as for continuous outcomes in addition to binary outcomes. They then develop IPW, regression function, and AIPW estimators for the RR in the context of an observational study where one needs to adjust for confounders and to estimate nuisance functions. They derive asymptotic unbiasedness and normality results of their estimators under the assumption that the true underlying outcome and propensity score models are (generalized) linear models and derive an expression for the asymptotic variance. They also derive, using the influence function theory, and efficient AIPW estimator.
Claims And Evidence: Theoretical results are well presented and supported.
Methods And Evaluation Criteria: Yes. See experiments and benchmarks section below.
Theoretical Claims: I did not check in detail. The parts I quickly skimmed looked fine and the proofs in the appendix seem to be well-written.
Experimental Designs Or Analyses: The synthetic setup makes sense and supports the asymptotic unbiasedness results of the proposed algorithms, except for the Logistic RR-IPW, although that is not surprising to me, as IPW estimators almost always have very high variance with finite samples in practice. Authors could also plot histograms instead of boxplots to demonstrate asymptotic normality of their estimators.
Supplementary Material: I took a quick look at some proofs. Did not spend much time though.
Relation To Broader Scientific Literature: Formal results on RR estimators could be very useful to practitioners in practice. Methodologically, most of the techniques used are standard and do not reveal new insights, except perhaps for the results that require the derivation of influence functions for the RR estimand.
Essential References Not Discussed: I am not an expert on the literature about RR, but was satisfied with how the related work was covered.
Other Strengths And Weaknesses: The paper is very well-written and structured. It is self-contained in terms of the statistical methodology it uses, which could be very helpful to those who are not very familiar with this type of (e.g., CLT) results/derivations.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your positive review. We are glad that you found the paper well-written, well-structured with interesting results for practitioners. We also appreciate your suggestion regarding the use of histograms to illustrate the asymptotic normality of our estimators. We agree that this would provide a clearer visual understanding of the results and plan to incorporate it in the next revision. | null | null | null | null | null | null |
FairPFN: A Tabular Foundation Model for Causal Fairness | Accept (poster) | Summary: FairPFN is a tabular foundation model designed to address algorithmic bias in machine learning without requiring prior knowledge of the underlying causal model. Existing causal fairness methods rely on predefined causal structures, limiting their applicability in complex real-world scenarios. FairPFN overcomes this by pre-training a transformer on synthetic causal fairness data, where protected attributes are modeled as binary exogenous causes. This foundation model allows FairPFN to identify and mitigate the causal effects of protected attributes using only observational data. The model demonstrates strong performance in both fairness and predictive accuracy across hand-crafted and real-world fairness scenarios, outperforming robust baseline methods. By removing reliance on manually specified causal models, FairPFN makes causal fairness more accessible and provides a scalable solution for mitigating algorithmic discrimination in critical applications.
Claims And Evidence: The paper presents FairPFN as a foundation model for causal fairness, claiming it effectively identifies and removes the causal effects of protected attributes without requiring prior causal knowledge. The empirical results likely support its strong performance in fairness and predictive accuracy across diverse scenarios.
Methods And Evaluation Criteria: The methods broadly make sense.
Theoretical Claims: 1. Theorem 3.1 can be relaxed to form dataset-level counterfactual fairness metrics, such as the Absolute Error (AE) between predictive distributions on real and counterfactual datasets, Equation (2). When the authors try to extend counterfactual fairness to the dataset level, why is the sensitive attribute changed under $X$ rather than $\hat{Y}$, as described in Theorem 3.1?
2. [089-091] Counterfactuals evaluate the impact of interventions on outcome variables. Counterfactuals are not the same as interventions. Please specify these two definitions more carefully.
Experimental Designs Or Analyses: 1. How do we measure counterfactual fairness? The paper uses ATE, but ATE is not equivalent to counterfactual fairness. Feature correlation can only show an association relationship rather than a causal relationship.
2. The causal case studies all focus on simple cases, but what about complex causal structures?
Supplementary Material: Yes.
Relation To Broader Scientific Literature: This work addresses a key limitation in existing causal fairness frameworks, namely, the requirement for prior knowledge of the causal graph.
Essential References Not Discussed: Probably not.
Other Strengths And Weaknesses: 1. This paper is likely an incremental work that applies PFN-related ideas to causal fairness. However, it is well-presented and open-source.
2. The title claims 'Causal Fairness,' while the paper mainly focuses on counterfactual fairness. Please be aware of the distinction between these two definitions.
3. Intervention and counterfactual are two different levels; please be aware and do not mix them up.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How does the model handle cases where the real-world bias patterns differ from the assumed causal bias model?
2. Can the proposed tabular foundation model work with continuous sensitive attributes? (since it is a tabular foundation model)
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We would like to thank you for your detailed response! We have outlined our clarifications and proposed changes below:
> When the author tries to extend counterfactual fairness to the dataset level, why is the sensitive attribute changed under X rather than Y^hat, as described in Theorem 3.1?
We apologize for the inconsistency in notation. We propose to adopt Plecko et al.'s (2024) notation for unit-level/probabilistic and population-level counterfactual fairness, clarifying that AE is a metric form of their population-level version.
* **Unit-Level/Probabilistic:** $P(y_{A\rightarrow a}(u) | X = x, A=a) = P(y_{A\rightarrow a'}(u) | X = x, A=a)$
* **Population-level:** $P(Y_{A\rightarrow a} | X = x, A=a) = P(Y_{A\rightarrow a'} | X = x, A=a)$
* **Absolute Error (AE)** $$AE = |P(Y_{A\rightarrow a} | X = x, A=a) - P(Y_{A\rightarrow a'} | X = x, A=a) |$$
> Counterfactuals evaluate the impact of interventions on outcome variables. Counterfactuals are not the same as interventions.
We appreciate your comment regarding the important distinction between interventions and counterfactuals. We propose to highlight this distinction in the Background section, also noting that counterfactuals crucially hold noise terms constant and thus require full knowledge of the causal model. We will also remove the term "interventional datasets" instead correctly referring to them as counterfactual.
> The title claims 'Causal Fairness,' while the paper mainly focuses on counterfactual fairness.
Thank you for bringing up this question of framing. We understand counterfactual fairness as a specific causal fairness criterion. We believe that this work falls under the umbrella term of "causal fairness" for the following reasons:
1. Our pre-training objective is to make predictions from a modified SCM with outgoing edges from the protected attributes removed. This deviates from the methodology proposed in the original counterfactual fairness paper (Kusner et. al. 2017)
2. One of our evaluation measures is average treatment effect (ATE), which to your point is not a counterfactual fairness measure
3. Our key contribution is that causal effect removal can be performed from observational data alone. This contribution is relevant not only to causal fairness but, as other reviewers point out, could be of great impact to causal ML as a whole.
> How do we measure counterfactual fairness? The paper uses ATE, but ATE is not equivalent to counterfactual fairness.
Thank you for bringing up this point of confusion! We define the metric Absolute Error (AE) between predictions on observational and counterfactual datasets to measure counterfactual fairness, not ATE. AE is calculated as follows:
1. Predict with natural demographics and features
2. Predict with counterfactual demographics and counterfactual features
3. Take the absolute difference
This process is repeated over the samples in a dataset and is visualized in distribution form as in Figure 7.
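The three steps above can be sketched as follows; `toy_predict` is a made-up stand-in for FairPFN's predictive distribution, and the counterfactual features are generated by a toy rule, so this illustrates only the structure of the AE computation:

```python
import numpy as np

def toy_predict(a, x):
    # Placeholder for the model's predicted P(Y=1 | A, X): a simple logistic score.
    return 1.0 / (1.0 + np.exp(-(0.5 * a + x)))

def absolute_error(a, x, x_cf):
    """Per-sample AE between factual and counterfactual predictions."""
    p_factual = toy_predict(a, x)          # step 1: natural demographics and features
    p_counter = toy_predict(1 - a, x_cf)   # step 2: counterfactual demographics/features
    return np.abs(p_factual - p_counter)   # step 3: absolute difference

rng = np.random.default_rng(0)
a = (rng.uniform(size=100) < 0.5).astype(float)
x = rng.normal(size=100)
x_cf = x - 0.3 * (2 * a - 1)   # toy counterfactual features after flipping A

ae = absolute_error(a, x, x_cf)   # one AE value per sample, as in Figure 7
```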
> Feature Correlation can only show the association relationship rather than causal relationship.
This is a very valid point. To the best of our knowledge Figure 8 has not made a causal claim. However, we propose explicitly stating that this result should not be interpreted causally.
> Causal case studies all focus on simple cases, but what about complex causal structures?
This comment poses an interesting research question! In order to explore FairPFN's performance on more complex causal structures, we have sampled increasingly complex SCMs (up to 200 nodes) from our prior, and evaluated our model's performance. In the figure linked below, we observe that as SCM size increases, accuracy drops while fairness remains relatively constant. This is an interesting insight that we would include in the appendix.
https://anonymous.4open.science/r/FairPFN-supplementary/figures/complexity.png
> How does the model handle cases where the real-world bias patterns differ from the assumed causal bias model?
This is an important question! We have created a new synthetic case study "Endogenous Protected Attribute" showing that when protected attributes are confounded by unobserved variables, FairPFN reverts to an unfair classifier.
https://anonymous.4open.science/r/FairPFN-supplementary/figures/endogenous.pdf
We also refer to our response to reviewer GboP regarding an additional case study on intersectionality.
> Can the proposed tabular foundation model work with continuous sensitive attributes?
Currently FairPFN handles binary protected attributes, but it can be extended to continuous attributes by not binarizing values of $A$ during pre-training. We propose including this in an appendix section "Future Extensions".
**References**
1. Plecko, D et. al. Causal fairness analysis. Foundations and Trends in Machine Learning, 17:304–589, 2024, https://arxiv.org/abs/2207.11385
2. Kusner, M., et. al. Counterfactual fairness. NeurIPS’17 pp. 4069–4079, 2017. https://arxiv.org/abs/1703.06856 | Summary: Let $A$ denote (binary) protected attributes, $X$ features, and $Y$ a binary response variable. A FairPFN is a transformer trained on synthetic data in such a way that when conditioned on $(A, X_{bias}, Y_{bias})$ it is encouraged to complete a query from the same distribution, denoted $X_{bias}^{val}$ with not $Y_{bias}^{val}$ but $Y_{fair}^{val}$. After extensive pretraining, the FairPFN can be used to predict in a casually fair manner. Specifically, given a new dataset $\mathcal D$, we pass it to the FairPFN as context and then complete new queries in an ICL manner without any updates to the FairPFN.
## update after rebuttal
I maintain my score and positive impression of the paper.
Claims And Evidence: I am uncomfortable with presenting the FairPFN as learning some sort of Bayesian posterior predictive distribution. I think this would be true if the pretraining loss employed was $L(Y_{pred}, Y_{bias}^{val})$. However the actual pretraining loss employed is $L(Y_{pred}, Y_{fair}^{val})$.
In Line 238, “FairPFN thus approximates a modified PPD,...,” pointing to the PPD in Equation 3. This is not mathematically true, or at least I fail to see how this could be true. The pretraining of PFNs no longer targets the underlying Bayes PPD when the query undergoes a distribution shift relative to the context.
Methods And Evaluation Criteria: The method here consists of synthetic data generation followed by FairPFN pre-training on the synthetic data. This part is solid.
The baselines, synthetic and real datasets considered in the Experiments also seem sound to me.
Theoretical Claims: The only theoretical claim is that the FairPFN is approximating the PPD in (3) which I have doubts about, see comment under Claims and Evidence.
Experimental Designs Or Analyses: The experimental design and analyses are sound. A wide array of synthetic and real world data sets are considered. The evaluation metrics are sensible and align with standard ones studied in the fairness literature.
Supplementary Material: No
Relation To Broader Scientific Literature: This work borrows existing notions of causal fairness and really pushes what can be done by leveraging the power of Transformers. Notably, no algorithm for learning the underlying DAG is needed, allowing us to bypass a key challenge in causal fairness.
Essential References Not Discussed: I think all relevant references are cited and discussed.
Other Strengths And Weaknesses: I really enjoyed reading this paper. I think it’s a big step forward for causal fairness research. I can envision many future papers inspired by this work, for instance where we use other notions of causal fairness to generate synthetic data.
The main weakness that I see is related, again, to the claims that FairPFN is approximating a “modified” Bayes posterior predictive density (PPD). If it is, what is the likelihood-prior pair that underlies said modified Bayes PPD? I don’t think this question needs to necessarily be answered in this paper, but the paper certainly should avoid any chance of being misleading on this matter.
Other Comments Or Suggestions: * Figure 1 is visually appealing, but I'd still prefer to understand the data-generating mechanism through formal equations, which are currently missing from the early parts of Section 4.1
* Might be useful to define $A$ in Algorithm 1
Questions For Authors: 1. Would you consider writing a few comments on extending the current methodology to regression settings, i.e., when $Y$ is real-valued? Should we be concerned that PFNs do not handle real-valued responses in a very natural way?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments! Detailed responses below. Most importantly, we would like to clarify our perspective on the mathematical foundation of our method and how it relates to PPDs.
> In Line 238, “FairPFN thus approximates a modified PPD,...,” pointing to the PPD in Equation 3. This is not mathematically true, or at least I fail to see how this could be true..
> I don’t think this question needs to necessarily be answered in this paper, but the paper certainly should avoid any chance of being misleading on [whether it approximates a modified PPD]
We believe FairPFN indeed approximates the statement from Eq. (3), that is
$$p(y_f|x_b,D_b) \propto \int_{\Phi} p(y_f | x_b, \phi)p(D_b | \phi)p(\phi)d\phi.$$
Here, $\phi$ is the latent, which in our case is the whole SCM, its weights, noise terms, and activations. $\phi$ is sampled per dataset during training. We would like to clarify that the fair outcomes $y_f$ come from the same data generating process $\phi$ as $y_b$ with a deterministic mapping applied to the original SCM, namely the removal of outgoing edges from the protected attribute, with everything else remaining constant. In this way we believe that both SCMs, the original and intervened upon one, are encapsulated in $\phi$ as the distribution shift is deterministic.
We would also be happy to introduce an additional term $\phi'$ and a deterministic mapping from $\phi$ in order to reflect the difference between the original and intervened upon SCM. If your criticism is about calling this a "modified PPD", we are open to recommendations on the exact wording.
We now outline why we believe that FairPFN approximates $p(y_f|x_b,D_b)$ (Eq. 3):
During training we sample examples for our training set and our test set using the likelihood $p(x_b,y_b,y_f|\phi)$, as detailed in the Algorithm at https://anonymous.4open.science/r/FairPFN-supplementary/figures/data_generation.pdf
We can use this distribution to sample our biased conditioning set $D_b=\{(x^i_b,y^i_b) \sim p(x_b,y_b|\phi)\}_{1 \leq i \leq n}$, as well as our hold-out examples with fair labels.
Our training loss is defined as:
$$E_{\phi \sim p(\phi); (x_b,y_f) \sim p(x_b,y_f|\phi); D_b \sim p(D_b|\phi)}[-log(q_\theta(y_f|x_b,D_b))],$$
where we first sample our latent $\phi$ and then conditioned on it both our test input-output pair and our conditioning set.
We can reformulate this loss as a KL-divergence with the "modified PPD" from above, similar to the derivation by Müller et al. (2022), as follows
\begin{align}
&= E_{x_b,y_f,D_b \sim p(x_b,y_f,D_b)}[-log(q_\theta(y_f|x_b,D_b))]\\\\
&= - \int\int\int p(x_b,y_f,D_b)log(q_\theta(y_f|x_b,D_b))dx_b dy_f dD_b\\\\
&= - \int\int p(x_b,D_b) \int p(y_f|x_b,D_b) log(q_\theta(y_f|x_b,D_b))dy_f dx_b dD_b\\\\
&= \int\int p(x_b,D_b) \left[ \text{KL}(p(y_f|x_b,D_b)|| q_\theta(y_f|x_b,D_b)) + H(p(y_f|x_b,D_b))\right] dx_b dD_b\\\\
&= E_{x_b,D_b \sim p(x_b,D_b)}[\text{KL}(p(y_f|x_b,D_b)|| q_\theta(y_f|x_b,D_b))] + C,
\end{align}
where $H$ is entropy, $KL$ is KL divergence and $C$ is a constant that does not depend on $\theta$.
So minimizing the training loss is the same as minimizing the average KL divergence to $p(y_f|x_b,D_b)$; our model $q_\theta$ thus approximates it. Do you agree with this view? We propose to add this derivation to the paper.
> Figure 1 is visually appealing, but I’d still prefer to understand the data generating mechanism through formal equations which is currently missing in the early parts of Section 4.1
We propose to provide an Algorithm 2 in Section 4.1, namely https://anonymous.4open.science/r/FairPFN-supplementary/figures/data_generation.pdf, detailing how synthetic datasets are sampled from our MLP implementation of Structural Causal Models (SCMs).
Further, our synthetic datasets can be sampled and visualized with https://anonymous.4open.science/r/FairPFN/prior_data_example.ipynb
> Might be useful to define A in Algorithm 1
We will update line 173 to
*Sample $D_{bias} = (A, X_{bias}, Y_{bias})$ from $\Phi$*
and define the terms very explicitly in our new Algorithm 2
> Would you consider writing a few comments on extending the current methodology to regression settings, i.e., when Y is real-valued. Should we be concerned that PFNs do not handle real-valued responses in a very natural way?
The recently released version of the TabPFN (Hollmann et al. 2025), whose predecessor we are building upon, integrates regression and one could follow their simple, but, according to their benchmarks, powerful setup of discretizing the continuous space. We will further add an Appendix, where we detail more future work possibilities like this.
**References**
1. Müller, S., et al. F. Transformers Can Do Bayesian Inference. ICLR, 2022. https://openreview.net/forum?id=KSugKcbNf9
2. Hollmann, N., et al. Accurate predictions on small data with a tabular foundation model. Nature, 637(8045):319–326, 2025. https://www.nature.com/articles/s41586-024-08328-6 | Summary: The paper introduces FairPFN, a tabular foundation model for causal fairness in machine learning. Pre-trained on synthetic causal fairness data, it mitigates the influence of protected attributes without requiring prior causal knowledge. Experiments show FairPFN effectively removes causal bias while maintaining strong predictive accuracy. Key contributions include a novel pre-training strategy, a synthetic causal data prior, and a fairness-aware foundation model.
## Update after rebuttal
Thanks for the authors' rebuttal; it resolved some of my concerns, and I will maintain my rating.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence, including quantitative results showing improvements in causal fairness metrics (ATE) and predictive accuracy (1-AUC) across synthetic and real-world datasets. Meanwhile, visual comparisons and ablation studies demonstrate the effectiveness of FairPFN in removing causal effects.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for causal fairness in tabular data. Synthetic causal case studies enable controlled experiments with known ground truth, ensuring rigorous testing. Standard metrics like ATE and predictive error assess both fairness and accuracy. Comparisons with multiple baselines, from traditional ML models to causal fairness frameworks, further validate FairPFN’s effectiveness.
Theoretical Claims: The theoretical claims about FairPFN's ability to approximate the Posterior Predictive Distribution (PPD) and integrate over causal explanations are plausible and align with previous work on Prior-data Fitted Networks (PFNs). The connection to Bayesian Inference is well-established, and the modification to focus on causally fair targets is logically consistent with the goals of the paper. No specific proofs are provided, but the conceptual framework is sound and builds on existing theoretical foundations in causal ML and PFNs.
Experimental Designs Or Analyses: The authors use a diverse set of synthetic causal case studies with varying complexity and known ground truth. They also evaluate on real-world datasets with established causal graphs. Additionally, they compare against multiple relevant baselines, including both traditional ML models and causal fairness methods.
Supplementary Material: Supplementary material is submitted along with the main text, showing more results on real-world datasets and ablation analyses.
Relation To Broader Scientific Literature: FairPFN addresses limitations in existing causal fairness frameworks that require prior knowledge of causal models. It builds on concepts from counterfactual fairness and the Causal Fairness Analysis (CFA) framework, while relaxing assumptions about the need for user-specified causal information. FairPFN also extends the PFN paradigm to causal fairness, leveraging the success of models like TabPFN in small tabular classification tasks. It demonstrates how PFNs can be adapted for complex causal tasks, opening new research avenues in causal ML.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The model is tested on real-world datasets with established causal graphs, demonstrating its practical utility. Meanwhile, the paper is well-structured and clearly explains the methodology, experiments, and results; the figures and tables effectively support the text.
However, the pre-training process requires significant computational resources (3 days on an RTX-2080 GPU), which might limit accessibility for some researchers. Meanwhile, the paper notes that FairPFN increases the correlation of "Sex" with predictions in one analysis, suggesting potential issues with intersectional fairness that need further investigation.
Other Comments Or Suggestions: The authors might consider releasing the code and pre-trained models to facilitate reproducibility and further research in this area.
Some figures (e.g., Figure 1) are quite complex and might benefit from additional explanation or simplification for better understanding.
Questions For Authors: See "Other Comments Or Suggestions" and "Other Strengths And Weaknesses".
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your constructive feedback! In response, we have: (1) clarified that FairPFN requires no retraining for new applications (2) added analysis on intersectional fairness through a synthetic case study; (3) clarified the availability of inference and pre-training data generation code; and (4) simplified Figure 1 while adding a supporting pseudocode algorithm detailing the pre-training data generation process. We welcome any additional guidance should you have further suggestions to enhance our work.
> However, the pre-training process requires significant computational resources (3 days on an RTX-2080 GPU), which might limit accessibility for some researchers.
We apologize for the confusion! In fact, FairPFN is a pre-trained foundation model that requires no retraining for researchers wishing to reproduce our results or apply it to new fairness problems. We propose adding a sentence at the end of Section 4.1 (Real World Inference) stating: "As a pre-trained foundation model, FairPFN can be directly applied to new fairness problems through a single forward pass of data through the transformer, eliminating any need for retraining." Our inference and synthetic data code is available at https://anonymous.4open.science/r/FairPFN
> FairPFN increases the correlation of "Sex" with predictions in one analysis, suggesting potential issues with intersectional fairness that need further investigation.
We appreciate your concern regarding the increased correlation of “Sex” in Figure 8. The reason for this is that FairPFN is currently pre-trained for binary classification tasks with single, binary protected attributes, so it is simply not tasked to mitigate the effects of secondary protected attributes.
In order to further investigate this result, we have created a synthetic case study "Multiple Protected Attributes" demonstrating that while FairPFN removes the effect of the first protected attribute, the causal effect of secondary protected attributes resembles that of a non-fairness aware classifier. A visualization of our new "Multiple Protected Attributes" case study and an illustration of this result is linked below:
https://anonymous.4open.science/r/FairPFN-supplementary/figures/multiple.pdf
We propose the following changes:
1. Highlight this limitation in the results section
2. Include the causal graph visualization of this new case study in the appendix, as well as the above figure illustrating how FairPFN reverts to a non-fairness aware classifier regarding secondary protected attributes
3. Detail methodological changes in an appendix section titled “Future Extensions” for addressing the challenge of intersectionality.
- This would include sampling multiple protected attributes as exogenous causes in the synthetic pre-training data generation and informing the transformer via an encoding mask which of these multiple variables is protected. Then in order to generate the fair outcomes, dropout would need to be performed on the outgoing edges of all simulated protected attributes.
> The authors might consider releasing the code and pre-trained models to facilitate reproducibility and further research in this area.
We appreciate your emphasis on reproducibility! We have actually already provided these resources in our initial submission, specifically an inference pipeline to access our pre-trained model and the code to generate our synthetic pre-training data. Our Anonymous GitHub Repository link is included at the end of Section 1: https://anonymous.4open.science/r/FairPFN
> Some figures (e.g., Figure 1) are quite complex and might benefit from additional explanation or simplification for better understanding.
Thank you for this feedback regarding the clarity of our figures! We've modified Figure 1 to focus exclusively on visualizing the SCM from which our synthetic pre-training data is sampled. The updated and simplified version of Figure 1 is linked below:
https://anonymous.4open.science/r/FairPFN-supplementary/figures/flowchart.pdf
To compensate for this simplification, we've developed a pseudocode algorithm explaining our synthetic data generation process, which we propose including in the main text:
https://anonymous.4open.science/r/FairPFN-supplementary/figures/data_generation.pdf | null | null | null | null | null | null | null | null |
Adaptive Elicitation of Latent Information Using Natural Language | Accept (poster) | Summary: This paper studies the problem of adaptively selecting queries to reduce uncertainty on a latent entity. They propose to fine-tune an LLM for meta-learning the task of question answering with a latent entity (e.g. the 20 questions game with different hidden entities). This approach allows for measuring uncertainty of the latent entity as the LLM’s predictive uncertainty in predicting answers to future questions. This approach is shown to be effective in producing calibrated uncertainty estimates which allows for better uncertainty-guided question selection.
Claims And Evidence: The claim that the adaptive query selection strategy is better than a random selection strategy is supported by results in Figure 3 as well as Figure 5.
The claim that the meta-learning method effectively learns to model uncertainty is supported by Figure 4 where the ECE is clearly very low. It’s unclear though how exactly the confidence is calculated in this case. It is stated in Section 2.3 that the “variability in these simulated futures” is treated as a measure of uncertainty, but it is not mentioned anywhere what measure of variability is used and how uncertainty is converted to confidence. I believe some notion of entropy is used, but more details about how the entropy is computed would help clarify this point.
The claim that their method works for general latent information elicitation settings is supported by their experiments across three diverse applications.
Methods And Evaluation Criteria: The proposed methods and evaluation metrics make sense.
Theoretical Claims: I looked over the proof of Theorem A.4 which bounds the performance gap of the greedy approach and an optimal planning method. The proof is relatively straightforward, although I did not look into all of the details.
Experimental Designs Or Analyses: The experiment in Figure 3 and the analysis for how well the proposed method can perform adaptive elicitation is valid. The examination of calibration to determine effectiveness of their measure of uncertainty is also a valid approach. Their further analysis on how well their method performs on the most challenging questions is very valuable and provides useful insight into where their approach works best. Finally, the ablation on the underlying model in their framework is a valid way to show that the meta-training method is vital to their method’s success.
Supplementary Material: I looked at the provided code, but I did not try to run anything. The code is relatively straightforward.
Relation To Broader Scientific Literature: The method presented in this paper for fine-tuning an LLM for the task of meta-learning next action prediction has been studied in prior work in reinforcement learning, but this paper uses the idea for LLM uncertainty quantification. The problem setting in this paper has also been studied in prior work such as Uncertainty of Thoughts which similarly tries to determine the best questions to ask to elicit the most information, but this paper shows that fine-tuning an LLM for meta-learning is actually useful for this setting.
Essential References Not Discussed: The Uncertainty of Thoughts paper is mentioned in the related work, but it is unclear why it is not directly compared against in any of the experiments.
Other Strengths And Weaknesses: Other strengths:
- The problem considered is well motivated as highly significant.
- The experiments and results are presented in a clear manner.
Other weaknesses:
- The experiments seem to be lacking on baselines. It is well supported that the proposed LLM fine-tuning method is useful for uncertainty quantification, and the selection method is useful for the question selection task, but it is unclear how this approach fares compared to the other methods which address these problems.
- A discussion on the computational expense of the method would be helpful since it seems like if there are thousands of candidate questions, then even the greedy method could be very slow.
Other Comments Or Suggestions: - Line 36 left: The sentence is either missing punctuation or generally does not make sense.
- Line 402 right: “setti” -> “setting”
- Line 652 is missing a closing parenthesis.
Questions For Authors: 1. Why are existing LLM uncertainty quantification methods not comparable baselines?
2. How is confidence, as shown in Figure 4, calculated?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback. We are encouraged that the reviewer found the problem setting to be well motivated, and our experiments and results to be presented clearly. Please see our responses to the particular concerns raised.
**[Q1: Calculating confidence]**
Confidence is calculated in the typical way as the softmax response for the predicted answer. In response to this concern, we will clarify this calculation in our revised paper, as well as clearly specify when we use related notions of entropy or variability.
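For concreteness, a minimal sketch of this calculation (illustrative code, not the paper's implementation): confidence is the softmax mass placed on the argmax answer, and ECE (as reported in Figure 4-style calibration checks) bins predictions by confidence and compares each bin's mean confidence to its empirical accuracy.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the answer vocabulary.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def confidence(logits):
    # Confidence = softmax probability assigned to the predicted (argmax) answer.
    return softmax(logits).max()

def expected_calibration_error(confs, correct, n_bins=10):
    # Standard ECE: weight each bin's |mean confidence - accuracy| gap
    # by the fraction of predictions falling in that bin.
    confs = np.asarray(confs, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confs > lo) & (confs <= hi)
        if mask.any():
            ece += mask.mean() * abs(confs[mask].mean() - correct[mask].mean())
    return ece
```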
**[Q2: Computational Expense]**
In terms of analysis, we provide a brief overview for the reviewer. Let $n$ be the number of possible questions to ask and $m$ be the number of possible answers to each question. The computational complexity of the EIG method at each time step $t$ is $O(m\cdot n)$. This is because we need to calculate the conditional entropy of each answer for each feasible question in the set. To calculate the complexity of MCTS, let $z$ be the number of trajectories we simulate and $d$ be the depth of simulation. Then MCTS is $O(z\cdot n\cdot m\cdot d) $, as we have $z$ simulations up to depth $d$ for each question, and at each depth we need to perform an EIG calculation. We will provide a detailed computational complexity analysis in our final draft.
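A minimal sketch of the greedy step makes the $O(m\cdot n)$ cost explicit (illustrative code assuming precomputed predictive answer distributions and posterior entropies, rather than the actual LLM calls):

```python
import numpy as np

def entropy(p):
    # Shannon entropy (nats) of a discrete distribution.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def greedy_eig_select(prior_entropy, answer_probs, posterior_entropies):
    """Return the question index with maximal expected information gain.

    prior_entropy: H(target | history), constant across candidate questions.
    answer_probs[i][j]: predictive probability of answer j to question i.
    posterior_entropies[i][j]: H(target | history, question i, answer j).
    One pass over m answers for each of n questions -> O(m * n).
    """
    best_q, best_gain = None, float("-inf")
    for i in range(len(answer_probs)):
        expected_posterior = sum(
            p * h for p, h in zip(answer_probs[i], posterior_entropies[i])
        )
        gain = prior_entropy - expected_posterior
        if gain > best_gain:
            best_q, best_gain = i, gain
    return best_q, best_gain
```

A fully informative question (posterior entropy zero for every answer) achieves the maximum gain, equal to the prior entropy.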
To compare the complexity of our method to other approaches, we note that common embedding-based methods also require a linear scan over the possible questions, yielding a complexity similar to our greedy method. Furthermore, methods that directly use LLMs to generate questions and responses require forward passes proportional to the number of tokens in each question and response, while our method requires only one forward pass in this respect. Applying MCTS on top of using LLMs to generate responses (as in UoT) incurs complexity $O(z\cdot n\cdot m\cdot d)$ multiplied by the complexity of generating responses.
We acknowledge the complexity of the MCTS approach can be a bottleneck in scenarios with an extremely large question space or stringent real-time constraints. However, we also want to highlight that there are also many important applications (including some that we consider) where this should not be prohibitive.
For example, in educational settings the extra computation can be performed during the student’s response time, as the policy is updated while the student works on the current question—thus introducing a tolerable lag. Conversely, in real-time applications such as live diagnostics or interactive dialogue systems, the computational overhead of MCTS might be prohibitive, and more efficient strategies (e.g., greedy selection) may be preferred. We view our work as introducing a new conceptual perspective on uncertainty-guided questioning, and we hope that future work will build on our approach to develop more computationally efficient solutions.
**[Q3: Comparison to other methods]**
While there exists a growing body of literature on UQ in LLMs, much of this work is focused on different settings and types of uncertainty than we address here. In particular, much of this growing body of work is focused on short-form generation and QA style tasks [e.g., 1-5]. These works are primarily focused on improving the reliability in off-the-shelf LLMs, by quantifying existing uncertainty in some answer to a response to some question. This is a fundamentally different task than quantifying how observing the answer to some question will affect your uncertainty in other question/answers pairs, and using these estimates to make adaptive decisions to reduce uncertainty.
With respect to Uncertainty of Thoughts (UoT), we did not compare against it as a baseline because UoT uses the LLM to *generate* potential questions, while our setting involves choosing potential questions from a given question bank. If we adapt UoT to our setting, it is equivalent to applying the EIG measure to the base LLM. We compare this in the ablation in Figure 6, and show that adapted UoT (EIG + base LLM) performs worse than random selection. We hypothesize that this worse performance is due to the specialized nature of datasets like OpinionQA and EEDI, which may not be in the training set of the base LLM, pointing to the necessity of our meta-learning procedure.
- [1] Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation https://arxiv.org/abs/2302.09664
- [2] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models https://arxiv.org/abs/2307.01379
- [3] Language Models (Mostly) Know What They Know https://arxiv.org/abs/2207.05221
- [4] Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback https://arxiv.org/abs/2305.14975
- [5] INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection https://arxiv.org/abs/2402.03744 | Summary: Authors consider a meta-learning QA scenario in which each dataset contains an unobservable latent variable (e.g., medical notes with an unseen clinical diagnosis). They propose an iterative and adaptive framework designed to reduce uncertainty around these latent variables.
Claims And Evidence: **Claim 1.** "We introduce an adaptive elicitation framework that uses natural language to actively reduce uncertainty on the latent entity by simulating counterfactual responses"
Yes, the framework is introduced in Sec. 2. I am not sure that the responses are actually counterfactual in this framework. See “Questions For Authors”.
**Claim 2.** "Our approach enables the model to identify epistemic uncertainty and facilitate sophisticated information gathering strategies as it updates its understanding of the latent ..."
Even though the method provides some form of uncertainty over a latent entity, I don’t see clear justification that predictive perplexity estimates epistemic uncertainty over a latent entity.
**Claim 3**. Experimentally, “we illustrate the versatility and significant potential of our framework to enable more efficient and targeted information elicitation in critical domains and applications.”
This claim is well-supported by the experiments.
Methods And Evaluation Criteria: The methods and evaluation criteria are logical. However, incorporating a realistic use case would make the argument more compelling.
Theoretical Claims: 1. The assumption of exchangeability is too strong. See “Other Comments and Suggestions” for further details.
2. The proof of A4 was difficult to follow. Specifically:
- Steps (5-7) require more detailed explanations.
- The definition and role of fact A2 are unclear.
- The bound $\log(|Y|(T-t))$ (LL680) is not well-explained.
- It is unclear why assumption A2 is referred to as a fact.
As a result, I was only able to fully understand the bound of the third term.
Experimental Designs Or Analyses: I think the datasets and baselines are well-selected. The only issue I see is the lack of computational efficiency analysis.
Supplementary Material: Supplementary material was hard to follow, I briefly reviewed everything, and took a close look at A4.
Relation To Broader Scientific Literature: I think the paper discusses well the relation to broader scientific literature. The work nicely extends work on LLM planning and uncertainty quantification to an important application: reducing uncertainty over a latent entity in a dialogue.
Essential References Not Discussed: I think more relevant works should be mentioned in related work section, for instance:
### Planning with LLMs
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents https://arxiv.org/pdf/2201.07207
ReAct: Synergizing Reasoning and Acting in Language Models https://arxiv.org/pdf/2210.03629
### UQ for LLMs
Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities https://arxiv.org/abs/2405.20003
Reducing Conversational Agents’ Overconfidence Through Linguistic Calibration https://aclanthology.org/2022.tacl-1.50/
Other Strengths And Weaknesses: ### Strength:
- Original work and important topic,
- The method is valid and supported by the experimental results
### Weakness:
- Some assumptions (exchangeability) are not well discussed, and are not realistic
- The method in my opinion could be motivated better, e.g.,
* Why is optimizing the joint log-likelihood/marginal likelihood optimal?
* Why is simulation needed to quantify uncertainty?
Other Comments Or Suggestions: Section 2: the method explanation in the beginning was unclear.
The exchangeability assumption is overly strong and known to be incorrect based on findings from cognitive psychology, such as order effects, framing, and anchoring effects. It is important to discuss and emphasize that this assumption is not realistic.
It would be useful to provide examples of questions and answers in the paper, and show how the uncertainty changes in practice.
Questions For Authors: I'm puzzled by the term "counterfactual response" in the paper. Could you clarify its meaning? I don't see how the responses generated by your framework qualify as counterfactual. Are you using "counterfactual" in a different sense than its common usage in causal inference literature?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful and thorough review, and their recognition that our claims are well-supported by experiments. We response below to the concerns.
**[Q1: Use of "counterfactual response"]**
We apologize for the confusing use of the term "counterfactual response" in the paper. "Counterfactual" has no reference to its usage in the causal inference literature and a better term to use would be "simulated response". We will replace this in the final draft.
**[Q2: Predictive perplexity / Joint likelihood]**
Our formulation of uncertainty quantification derives from a large body of work on inference with missing data (e.g. [1-2]). Under this view, knowing $U$ is defined as being able to predict any answer $Y$, with errors only due to random, aleatoric variation. Observing the infinite sequence $(X_{1:\infty}, Y_{1:\infty})$ reduces all epistemic uncertainty, leaving only aleatoric uncertainty: $H(Y \mid U) = H(Y \mid X_{1:\infty}, Y_{1:\infty})$. Then, epistemic uncertainty in $U$ naturally corresponds to uncertainty in the *missing data* $Y_{t+1:\infty}$ given history $Y_{1:t}$. The joint likelihood $P(Y_{t+1:\infty} \mid X_{1:t}, Y_{1:t})$ exactly quantifies uncertainty about the missing data $Y_{t+1:\infty}$ which shows it is the correct objective. Under the special condition of exchangeability, this is precise: there exists $\theta(X_{1:\infty}, Y_{1:\infty})$, such that $H(Y \mid U) = H(Y \mid \theta(X_{1:\infty}, Y_{1:\infty}))$.
We will include a detailed description in the paper.
**[Q3: Simulation to quantify epistemic/aleatoric?]**
Simulating future trajectories allows us to quantify which actions will reduce the most epistemic uncertainty. To quantify uncertainty about a target $Z$ we calculate,
$$\text{Epistemic Uncertainty}(Z \mid X_{1:t}, Y_{1:t}) = \underbrace{H(Z \mid X_{1:t}, Y_{1:t})}_{\text{total}} - \underbrace{\mathbb{E}[H(Z \mid X_{1:\infty}, Y_{1:\infty})]}_{\text{aleatoric}}.$$ In practice, we approximate this quantity using simulation to calculate existing epistemic/aleatoric uncertainty. The EIG then quantifies the reduction in epistemic uncertainty, which we also calculate by simulation.
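This simulation-based split can be sketched with the standard mixture decomposition (illustrative code with hypothetical inputs, not the paper's implementation): each row below would in practice be the meta-trained model's predictive distribution after conditioning on one simulated future trajectory.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (nats) of a discrete distribution.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def decompose_uncertainty(simulated_predictives):
    """Split predictive uncertainty for a target question into parts.

    simulated_predictives: shape (n_sims, n_answers); row k is the
    predictive distribution over answers after conditioning on the
    k-th simulated future (names here are illustrative).

    total     = H(mean_k p_k)   # marginal predictive entropy
    aleatoric = mean_k H(p_k)   # entropy that survives more data
    epistemic = total - aleatoric  # reducible by asking questions
    """
    sims = np.asarray(simulated_predictives, dtype=float)
    total = entropy(sims.mean(axis=0))
    aleatoric = float(np.mean([entropy(p) for p in sims]))
    return total, aleatoric, total - aleatoric
```

Simulations that are individually confident but mutually disagreeing yield high epistemic uncertainty; identical simulations yield zero.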
**[Q4: Exchangability assumption]**
We agree with the reviewer that the exchangeability assumption may be too strong in practice. We will outline these fundamental limitations in our camera-ready version. We also emphasize that exchangeability mainly serves as inspiration for our practical UQ framework, alongside our strong empirical results.
**[Q5: Computational analysis]**
Define $n$ as the number of questions to ask and $m$ as the number of possible answers. The computational complexity of the EIG method at each time step $t$ is $O(m\cdot n)$ because we need to calculate the conditional entropy of each answer for each feasible question in the set. The complexity of MCTS is $O(z\cdot n\cdot m\cdot d)$, where $z$ is the number of simulations and $d$ is the depth of simulation, since there are $z\cdot d$ EIG calculations. We will include a detailed discussion of complexity analysis in the camera-ready version.
**[Q6: Proof of A4]**
Steps (5-7) in the proof outline a telescoping expansion of the KL divergence term. The telescoping sum consists of three terms. (5) is the difference between the ground truth distribution $q$ and $p$ conditioned on the optimal information $X_{p_{\theta}}^{\ast}$. (6) is the difference of $p(Y_{t:T} | X_{p_{\theta}}^{*})$ when $Y_{t:T}$ is simulated from $q$ versus $p$. Finally (7) measures the difference of conditioning on the optimal information $X_{p_{\theta}}^{\ast}$ and $X_{greedy}$.
Regarding Fact A.2, we apologize as we made a typo. Fact A.2 refers to the fact that for any $S \in \mathcal{S}$, $0\leq H(S) \leq \log |S|$ on lines 671-674. The term $\log (|Y| (T-t))$ comes from bounding $H(Y_{t:T} | X_{p_{\theta}}^{*})$ on line 685. Because $Y_{t:T}$ is exchangeable and the ordering does not matter, the cardinality of $Y_{t:T}$ is $|Y|(T-t)$.
**[Q7: Realistic use cases]**
Thank you for this suggestion, we agree that incorporating more realistic use cases could strengthen the evaluation, especially in high-impact areas like medical diagnosis. While 20 Questions is synthetic, we highlight that EEDI and OpinionQA are real-world datasets in personalized tutoring and opinion polling, important domains where a technique like ours might make significant impact.
**[Q8: Related work/qualitative examples]**
We thank the reviewer for pointing out missing related work; we will include these and additional references in our final version. We will also include examples of questions and answers in the paper, and show how uncertainty changes in practice.
[1] Edwin Fong, Chris Holmes, and Stephen G Walker. Martingale posterior distributions. Journal of the Royal Statistical Society, Series B, 2023.
[2] Naimeng Ye and Hongseok Namkoong. Exchangeable sequence models quantify uncertainty over latent concepts. arXiv preprint arXiv:2408.03307, 2024. | Summary: This paper introduces an adaptive elicitation framework for actively reducing uncertainty about latent entities, using adaptive query selection and simulated counterfactual responses. It leverages a meta-trained LLM to quantify uncertainty about future or unobserved answers via simulation, then iteratively selects queries that maximize expected information gain. Experiments on dynamic opinion polling, adaptive student assessment, and a structured 20 Questions game demonstrate that the method significantly improves accuracy and uncertainty reduction.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence, including empirical results across three different tasks (Opinion QA, EEDI tutoring, and 20 questions), and the theoretical analysis on the expected information gain.
Methods And Evaluation Criteria: The use of LLMs meta-trained with historical data containing latent entities is reasonable and well-motivated. Predictive perplexity as a measure of uncertainty makes sense, as it provides a practical and scalable way to approximate epistemic uncertainty without requiring explicit latent variable modeling. For the evaluation, the selected datasets cover diverse domains, and the authors select common metrics for uncertainty quantification, such as ECE and Brier score.
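For reference, predictive perplexity in the generic sense used here (not necessarily the paper's exact implementation) is the exponentiated average negative log-likelihood per token:

```python
import math

def perplexity(token_logprobs):
    """Predictive perplexity over a held-out answer sequence.

    token_logprobs: per-token log-probabilities (natural log) assigned by
    the model; lower probability -> higher perplexity -> more uncertainty.
    """
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)
```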
Theoretical Claims: I did not check the proofs in the Appendix due to time constraint.
Experimental Designs Or Analyses: * Validity of the experimental design is checked.
The baselines (in-context tuning, base LLM with random question selection) use the same backbone model or meta-training setting as in the main experiment, which is a fair comparison. The split is done on latent entities instead of just QA pairs, which avoids data leakage. A consistent improvement in performance and a reduction in uncertainty are clear across all three datasets and evaluation metrics, showing the effectiveness of the adaptive elicitation framework. The ablation studies are extensive, including comparisons between adaptive (EIG, MCTS) and random selection for subsets of different difficulty, comparisons of performance gains from planning using different models, and comparisons of different numbers of target questions.
Supplementary Material: I checked the info gain calculation and MCTS implementation in `info_gain.py` and `utils.py`.
Relation To Broader Scientific Literature: The paper builds on previous RL research demonstrating that pretrained models can be adapted for decision-making tasks with offline data and that meta-learned sequence models approximate bandit algorithms, but it focuses instead on the natural language setting and uncertainty quantification in decision making. It is complementary to and aligns with work on uncertainty quantification over natural language. It also fits into the category of planning and information gathering with LLMs. However, unlike prior methods that rely on off-the-shelf models for uncertainty estimation, it uses a meta-learning procedure.
Essential References Not Discussed: To my knowledge, there are no essential related works that are missed.
Other Strengths And Weaknesses: Strengths:
* Uncertainty-aware decision-making in LLMs is an important topic, and the specific focus of eliciting latent entities with meta-training and adaptive question selection appears a novel contribution.
* The selected datasets are from various domains, and effectively demonstrate the framework’s potential for latent entity elicitation.
* Code is available
* The paper is well-presented and easy to follow
Weaknesses:
* Lacking analysis and comparison on the overall complexity of the method (especially for variants with MCTS)
Other Comments Or Suggestions: * It would be helpful if the authors could outline "the rich body of literature on topics related to latent entity modeling" in the appendix.
Questions For Authors: * Is the performance sensitive to the choice of the embedding model? Have you tried such ablation with different embedding models?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their effort in reviewing our paper, and their thoughtful feedback. We are pleased that the reviewer appreciated the importance of the topic as well as the novelty of our approach. Below please see our responses to the particular concerns raised.
**[Q1: Sensitivity to embedding model]**
As per the reviewer's suggestion, we ran additional experiments to ablate the importance of the embedding model. We tested 5 embedding models: stella_en_1.5B_v5, Qwen2-7B-instruct, e5-mistral-7b-instruct, OpenAI text-embedding-3-large, and AlibabaNLP/gte-large-en-v1.5 (which we used in our original paper), and measured their accuracy after conditioning on 3 in-context questions for the EEDI and OpinionQA datasets. We set the number of targets to 5 and the number of possible questions to ask to 20, mirroring the setup in our paper. The results are below, where each entry is an average over 10,000 different runs:
| Embedding Model | EEDI Accuracy | OpinionQA Accuracy |
|----------------------------------------|---------------|--------------------|
| AlibabaNLP/gte-large-en-v1.5 | 0.621 | 0.505 |
| stella_en_1.5B_v5 | 0.618 | 0.510 |
| gte-Qwen2-7B-instruct | 0.625 | 0.501 |
| e5-mistral-7b-instruct | 0.626 | 0.506 |
| OpenAI text-embedding-3-large | 0.631 | 0.515 |
We find that there is some variation in embedding model performance. For EEDI, the larger 7B models and OpenAI text-embedding-3-large have slightly higher performance. For OpinionQA, the performance is more varied. However, none of the embedding models make the In-Context Tuning + Embedding baseline outperform our method. Our method achieves 0.678 accuracy on EEDI and 0.597 accuracy on OpinionQA, as reported in Figure 3.
**[Q2: Computational Complexity]**
In terms of analysis, let $n$ be the number of possible questions to ask and $m$ be the number of possible answers to each question. The computational complexity of the EIG method at each time step $t$ is $O(m\cdot n)$. This is because we need to calculate the conditional entropy of each answer for each feasible question in the set. To calculate the complexity of MCTS, let $z$ be the number of trajectories we simulate and $d$ be the depth of simulation. Then MCTS is $O(z\cdot n\cdot m\cdot d) $, as we have $z$ simulations up to depth $d$ for each question, and at each depth we need to perform an EIG calculation. We will provide a detailed computational complexity analysis in our final draft.
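The $O(m\cdot n)$ greedy scan described above can be made concrete with a short sketch. This is a hypothetical, simplified illustration (not the authors' actual code): `answer_dist` and `posterior_entropy` stand in for the meta-trained LLM's predictive distribution over answers and the resulting entropy over the target, which this sketch does not reproduce.

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def greedy_eig(candidate_questions, answer_dist, posterior_entropy):
    """Pick the question maximizing expected information gain (EIG).

    answer_dist(q)          -> list of (answer, prob) pairs: the model's
                               predictive distribution over answers to q.
    posterior_entropy(q, a) -> entropy of the target after observing
                               answer a to question q.
    Each call scans all n questions and m answers: O(n * m) per step.
    """
    def expected_posterior_entropy(q):
        return sum(p * posterior_entropy(q, a) for a, p in answer_dist(q))
    # Minimizing expected posterior entropy == maximizing EIG,
    # since the current (prior) entropy is the same for every question.
    return min(candidate_questions, key=expected_posterior_entropy)
```

Wrapping this scan inside $z$ simulated trajectories of depth $d$ yields the quoted $O(z\cdot n\cdot m\cdot d)$ cost for MCTS.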
To compare the complexity of our method to other approaches, we note that common alternatives like embedding-based methods require a linear scan of the possible questions, yielding a complexity similar to our greedy method. Furthermore, methods that directly use LLMs to generate questions and responses require forward passes proportional to the number of tokens in each question and response, while our method requires only one forward pass in this respect. Applying techniques like MCTS on top of LLM-generated responses incurs complexity $O(z\cdot n\cdot m\cdot d)$ multiplied by the cost of generating responses.
We acknowledge that the complexity of the MCTS approach can be a bottleneck in scenarios with an extremely large question space or stringent real-time constraints. However, we also want to highlight that there are also many important applications (including some that we consider) where this should not be prohibitive.
For example, in educational settings the extra computation can be performed during the student’s response time, as the policy is updated while the student works on the current question—thus introducing a tolerable lag. Conversely, in real-time applications such as live diagnostics or interactive dialogue systems, the computational overhead of MCTS might be prohibitive, and more efficient strategies (e.g., greedy selection) may be preferred. We view our work as introducing a new conceptual perspective on uncertainty-guided questioning, and we hope that future work will build on our approach to develop more computationally efficient solutions (e.g., via distillation).
In response to the reviewer's concern, we will include a more detailed discussion of these trade-offs and complexity considerations in the camera-ready version.
**[Q3: Related work on latent entity modeling]**
We agree with the reviewer that a review surrounding latent entity modeling would be of major benefit. We will make sure to include this discussion in the camera-ready version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and additional experiments on embedding models, which addressed my concerns. I have raised my score to 4. | Summary: This paper introduces a framework for adaptive elicitation of latent information using LLMs to optimize question selection based on predictive uncertainty. Instead of explicitly modeling latent variables (e.g., knowledge levels or user preferences), the method quantifies uncertainty via LLM perplexity and simulated future responses to guide information gain-based question selection. The approach, tested on opinion polling, student assessment, and the Twenty Questions game, outperforms baselines by efficiently reducing uncertainty. The key contributions include uncertainty-driven question selection (via Greedy EIG and MCTS) and a general-purpose adaptive questioning framework applicable to diverse domains like education and healthcare.
Claims And Evidence: The claim can be supported by the paper
Methods And Evaluation Criteria: Since the framework does not explicitly define or model the latent variable U, it remains unclear what the model has actually learned about the underlying information structure. This lack of interpretability makes it difficult to trust the system’s reasoning process, especially in high-stakes applications like education assessment or medical diagnosis. A more structured approach, such as probabilistic graphical models (e.g., Variational Autoencoders, Hidden Markov Models), could improve transparency.
The paper primarily uses PPL as the main measure of uncertainty, but PPL only reflects how well a language model predicts text sequences, not the true epistemic uncertainty about the latent variable. In scenarios where the dataset is imbalanced, the model may simply reinforce dominant patterns—for example, assuming that all patients have high blood pressure because it is the most common label in the data. This can lead to overconfident yet incorrect predictions, which undermines the reliability of uncertainty-based question selection.
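For reference, the perplexity (PPL) at issue here is simply the exponentiated average negative log-likelihood per token, so it directly measures sequence predictability rather than uncertainty about a latent variable. A minimal sketch of the standard definition:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-likelihood per token.

    token_logprobs: per-token natural-log probabilities assigned by the
    model to the observed sequence.
    """
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)
```

A model assigning every token probability 0.5 has perplexity 2, regardless of whether that predictability reflects genuine knowledge of the latent variable or merely a dominant pattern in the data.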
Theoretical Claims: Correct
Experimental Designs Or Analyses: The method relies on high-quality training data, but it is unclear where such data can be sourced across different domains like education and healthcare. Real-world datasets are often noisy, biased, or incomplete, which can affect the reliability of the model’s predictions. Additionally, LLMs inherently learn statistical patterns at the population level, making them effective for broad generalizations but less suited for capturing individual-specific nuances due to the lack of explicit memory or personalization.
The framework is evaluated only on Q&A-style tasks such as OpinionQA, student assessment, and the Twenty Questions game, which focus primarily on eliciting missing information rather than complex decision-making. This raises concerns about how well the method generalizes to more challenging tasks, such as medical diagnosis, scientific research, or multi-step reasoning, where information acquisition needs to be integrated with reasoning and planning.
The approach requires multiple forward passes through the LLM to compute uncertainty via response sampling, which is computationally expensive. This makes it impractical for real-time applications, such as interactive tutoring or live medical diagnostics, where response speed is crucial. Reducing reliance on repeated sampling, optimizing inference efficiency, or leveraging smaller models for uncertainty estimation could help mitigate this issue.
Supplementary Material: Reviewed Appendix A-D
Relation To Broader Scientific Literature: The use of perplexity (PPL) to quantify uncertainty relates to prior work on uncertainty-aware machine learning, particularly in Bayesian deep learning and active learning. Traditional methods, such as Monte Carlo Dropout (Gal & Ghahramani, 2016) and Deep Ensembles (Lakshminarayanan et al., 2017), explicitly model uncertainty by estimating the variance of predictions. In contrast, this paper proposes a novel use of LLM perplexity as a proxy for predictive uncertainty, which aligns with prior research on entropy-based uncertainty estimation in NLP (e.g., Maaløe et al., 2019). However, unlike standard Bayesian methods, this approach does not explicitly separate aleatoric and epistemic uncertainty, which may limit its robustness.
The idea of selecting optimal questions to reduce uncertainty aligns with Bayesian Experimental Design (Chaloner & Verdinelli, 1995) and active learning methods (Settles, 2010), which aim to iteratively collect informative data. Prior work in educational assessment (Reich, 2012) and personalized learning (Piech et al., 2015) has explored similar strategies for adaptively selecting test questions to estimate student knowledge. This paper extends these concepts by using large language models to dynamically generate and select questions based on expected information gain (EIG). The approach is conceptually similar to Pólya tree priors (Hanson, 2006) in Bayesian adaptive testing but replaces probabilistic models with LLM-driven heuristics.
Recent studies have explored using LLMs as agents for reasoning and interactive decision-making (e.g., Brown et al., 2020 (GPT-3); Wei et al., 2022 (Chain-of-Thought Reasoning)). This paper contributes to this line of research by demonstrating how LLMs can be used for adaptive information gathering, a capability related to human-like cognitive strategies for active inference (Friston, 2010) and rational metareasoning (Griffiths et al., 2019). Unlike previous work on prompting-based reasoning, this study frames LLMs as active participants in information elicitation, rather than passive responders.
The use of Monte Carlo Tree Search (MCTS) for multi-step question selection connects this work to reinforcement learning (Silver et al., 2016, AlphaGo) and Bayesian optimization for decision-making (Snoek et al., 2012). While MCTS has been widely applied to game playing and planning problems, its application to adaptive questioning with LLMs is relatively novel. However, this paper does not fully integrate reinforcement learning techniques, which distinguishes it from related work in LLM self-improvement (Schick et al., 2023).
Essential References Not Discussed: None
Other Strengths And Weaknesses: See Methods And Evaluation Criteria and Experimental Designs Or Analyses.
Other Comments Or Suggestions: See Methods And Evaluation Criteria and Experimental Designs Or Analyses.
Questions For Authors: See Methods And Evaluation Criteria and Experimental Designs Or Analyses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and consideration taken in reviewing our paper, and for recognizing our contribution to the application of uncertainty-driven question selection in important domains like education and healthcare. Below, we respond to your particular concerns.
**[Q1: Modeling the latent, relationship to perplexity, and separation of epistemic + aleatoric uncertainty]**
Most latent entities have no obvious physical meaning (e.g., academic proficiency or political opinions), and can only be observed through noisy observations (e.g., student answers where they guess if they're unsure).
Our framework avoids the need to perform such direct modeling, allowing us to build a framework that is able to be applied on existing LLMs to use their internet-scale pretrained knowledge to make robust, adaptive decisions.
Our formulation of uncertainty quantification derives from a large body of work on inference with missing data (e.g. [1-2]), and we will include a description of this relationship and our characterization of
epistemic and aleatoric uncertainty in the paper. Under this view, knowing $U$ is defined as being able to predict any answer $Y$, with errors only due to random, aleatoric variation. Observing the infinite sequence $(X_{1:\infty}, Y_{1:\infty})$ reduces all epistemic uncertainty, leaving only aleatoric uncertainty: $H(Y \mid U) = H(Y \mid X_{1:\infty}, Y_{1:\infty})$. Then, epistemic uncertainty in $U$ naturally corresponds to uncertainty in the *missing data* $Y_{t+1:\infty}$ given history $Y_{1:t}$. The joint likelihood $P(Y_{t+1:\infty} \mid X_{1:t}, Y_{1:t})$ exactly quantifies uncertainty about the missing data $Y_{t+1:\infty}$ which shows it is the correct objective. Under the special condition of exchangeability, this is precise: there exists a function $\theta(X_{1:\infty}, Y_{1:\infty})$, such that $H(Y \mid U) = H(Y \mid \theta(X_{1:\infty}, Y_{1:\infty}))$ exactly.
Simulating future trajectories allows us to quantify which actions will reduce the most epistemic uncertainty. To quantify uncertainty about a target $Z$, we calculate
$$\text{Epistemic Uncertainty}(X_{1:t}, Y_{1:t}) = \underbrace{H(Z \mid X_{1:t}, Y_{1:t})}_{\text{total}} - \underbrace{\mathbb{E}\left[H(Z \mid X_{1:\infty}, Y_{1:\infty})\right]}_{\text{aleatoric}}$$ If we could simulate $Y \sim p_{\theta}$ infinitely, we could calculate the epistemic/aleatoric decomposition exactly. In practice, we simulate finitely many time steps to approximate this quantity.
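This decomposition can be estimated from simulated trajectories with a standard mutual-information-style Monte Carlo estimate. The sketch below is a hedged illustration of that idea, not the paper's exact code; `target_probs_per_sim` stands in for the model's distribution over the target under each simulated completion of the future answers.

```python
import numpy as np

def epistemic_uncertainty(target_probs_per_sim):
    """Split total predictive entropy into epistemic and aleatoric parts.

    target_probs_per_sim: array of shape (S, K) -- for each of S simulated
    completions of the future answers, the distribution over the K
    outcomes of the target Z.
    """
    p = np.asarray(target_probs_per_sim, dtype=float)

    def H(q):  # entropy along the last axis, guarding log(0)
        q = np.clip(q, 1e-12, 1.0)
        return -(q * np.log(q)).sum(axis=-1)

    total = H(p.mean(axis=0))   # H(Z | history): marginal predictive entropy
    aleatoric = H(p).mean()     # E[H(Z | simulated full future)]
    return total - aleatoric    # epistemic = total - aleatoric
```

When all simulated futures agree on the target distribution, the epistemic term vanishes; when they disagree, it is strictly positive.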
In response to this concern, we will work to clarify these discussions in our revised paper.
**[Q2: Data requirements]**
We appreciate the reviewer's feedback about issues such as dataset imbalance or insufficient data. We will include a grounded discussion of the fundamental limitations of our approach---also shared by other approaches---in the camera-ready version. While LLMs may learn average-case behavior, our meta-learning approach is designed to personalize to individuals by learning what data to gather about each specific individual. A key advantage of our method is that we can include any/all user information for the LLM to condition on for sufficient personalization.
**[Q3: Complex Reasoning]**
Our work takes on a different approach to reasoning by integrating explicit uncertainty quantification and estimation with the natural language capabilities of LLMs. Our method can be complementary to reasoning models such as OpenAI's O1 and DeepSeek-R1 by explicitly integrating reasoning with our uncertainty estimation capabilities. Such ideas are beyond the scope of this work, as we aim to first demonstrate the validity of our UQ framework.
**[Q4: Computational Expense]**
While MCTS can be a bottleneck in settings with large question spaces or strict real-time constraints, it remains practical for many applications including those we study. In education, for example, computation can occur during a student’s response time, causing only minor lag. In contrast, real-time tasks like diagnostics or dialogue may require faster methods such as greedy selection. Our work offers a new perspective on uncertainty-guided questioning, and we hope it inspires more efficient approaches (e.g., via distillation).
We also note that methods like embedding-based methods will require a linear scan of the possible questions, yielding a complexity similar to our greedy method. Furthermore, methods that directly use LLMs to generate questions and responses require forward passes in the number of tokens in each question and responses, while our method only requires one forward pass in this respect. We will include a detailed discussion of complexity/tradeoffs in the camera-ready version.
[1] Edwin Fong, Chris Holmes, and Stephen G Walker. Martingale posterior distributions. Journal of the Royal Statistical Society, Series B, 2023.
[2] Naimeng Ye and Hongseok Namkoong. Exchangeable sequence models quantify uncertainty over latent concepts. arXiv preprint arXiv:2408.03307, 2024. | null | null | null | null | null | null |
Contrastive Learning with Simplicial Convolutional Networks for Short-Text Classification | Accept (poster) | Summary: This work proposes C-SCN, which combines contrastive learning with simplicial complexes in convolutional networks to capture higher-order interactions and improve short-text classification performance. Experimental results demonstrate its superiority over existing methods.
Claims And Evidence: The work identifies three issues with current methods:
1. The augmentation step may fail to generate positive and negative samples that are semantically similar and dissimilar to the anchor, respectively.
2. External auxiliary information may introduce noise to the sparse text data.
3. Limited ability to capture higher-order information, such as group-wise interactions.
However, it remains unclear how the proposed methods specifically address these issues.
SCN is used to capture higher-order information, but there are many alternatives, such as self-attention, jump self-attention, and etc. What are the reasons for adopting simplicial complexes over these other options?
The paper puts effort into introducing the message-passing mechanism. How is it integrated into convolutional networks, and what is the structure of the SCN? A figure may be a helpful illustration.
Methods And Evaluation Criteria: The proposed method is not well justified for the problem. The chosen benchmark datasets are appropriate for the task.
Theoretical Claims: Theoretical claims seem fine.
Experimental Designs Or Analyses: The experiment designs are good.
Supplementary Material: I reviewed the supplements.
Relation To Broader Scientific Literature: The idea to incorporate the high-order information has been highlighted in the existing works.
Essential References Not Discussed: What is the TDL mentioned in the introduction section? Please give a reference.
Other Strengths And Weaknesses: The topic is important in the field, and the idea of incorporating higher-order information into the model design is inspiring.
Other Comments Or Suggestions: No other comments.
Questions For Authors: Can the simplicial complexes be used in Transformer?
In the era of LLMs, models like ChatGPT, DeepSeek, and LLaMA have already achieved strong performance in short-text classification. For closed-source LLMs, we can use APIs and Apps, while for open-source LLMs, we can fine-tune them on local machines. Why not use these LLMs for short-text classification?
In the loss function in Eq. 12, both the classification loss and contrastive loss use features generated from the BERT model. It is difficult to justify whether the proposed SCN encoder contributes, and to what degree, to the performance.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **(W1) The work identifies three issues with current methods. However, it remains unclear how the proposed methods specifically address these issues.**
Due to the word limit constraints, we refer to the response to Reviewer jGxX (W3) for similar concerns.
**(W2) SCN is used to capture higher-order information, but there are many alternatives, such as self-attention, jump self-attention, and etc. What are the reasons for adopting simplicial complexes over these other options?**
As mentioned in Section 3, our model adopts a self-attention mechanism as the READOUT function. It summarises the words (0-simplexes, nodes) and the connections between words (1-simplexes, edges).
Self-attention (Vaswani et al., 2017) and jump self-attention (Zhou et al., 2022) are able to process the input sequence by allowing every element in a sequence to attend to every other element, regardless of their distance in the sequence. Hence, long-range information is captured. However, the attention scores calculated between two words are still pair-wise interactions, and the higher-order information is only accounted for if self-attention layers are stacked. Although the jump self-attention algorithm introduces the mix-hop mechanism, the number of hops $j$ participating in this mechanism requires additional hyperparameter tuning. In contrast, C-SCN involves a single layer of convolutional networks and incorporates multiple higher-order objects, including 0-simplexes, 1-simplexes, and 2-simplexes, without requiring additional hyperparameters.
_Vaswani et al. (2017). Attention is all you need. In NIPS._
_Zhou et al. (2022). Jump Self-Attention: Capturing High-Order Statistics in Transformers. In NIPS._
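For readers unfamiliar with simplicial convolutions, a generic single-layer sketch can illustrate how 0-, 1-, and 2-simplexes interact through boundary matrices and the Hodge Laplacian $L_k = B_k^\top B_k + B_{k+1} B_{k+1}^\top$. This is a standard textbook construction, not the authors' exact architecture:

```python
import numpy as np

def hodge_laplacian(B_k, B_kp1):
    """Hodge Laplacian on k-simplexes from boundary matrices:
    L_k = B_k^T B_k + B_{k+1} B_{k+1}^T,
    where B_k maps k-simplexes to (k-1)-simplexes."""
    down = B_k.T @ B_k if B_k is not None else 0.0
    up = B_kp1 @ B_kp1.T if B_kp1 is not None else 0.0
    return down + up

def scn_layer(X, L, W):
    """One generic simplicial convolution step: X' = ReLU(L X W)."""
    return np.maximum(L @ X @ W, 0.0)
```

For a filled triangle (three words forming a 2-simplex), the edge features are mixed through both their shared nodes (the $B_1^\top B_1$ term) and the triangle they bound (the $B_2 B_2^\top$ term) in a single layer, which is the sense in which higher-order interactions need no stacking.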
**(W3) The paper puts effort into introducing the message-passing mechanism. How is it integrated into convolutional networks, and what is the structure of the SCN? A figure may be a helpful illustration.**
We have included the detailed message-passing mechanism in Section 3, and Figure 3 demonstrates the model architecture.
**(W4) What is the TDL mentioned in the introduction section? Please give a reference.**
Topological Deep Learning (TDL) bridges the gap between topology and machine learning, offering powerful tools to model complex data structures and relationships. By incorporating topological insights, it enhances the interpretability, robustness, and performance of deep learning models across a wide range of applications. Clough et al. (2022) adopted TDL to improve image classification, object detection, and segmentation by capturing the geometric and topological structure of images and utilising persistent homology to extract shape-based features that complement traditional pixel-based methods. Shen et al. (2024) incorporated both high-order interactions and multiscale properties into a TDL architecture, specifically simplicial neural networks, where polymer molecules are represented as a series of simplicial complexes at different scales to enhance the accuracy of polymer property prediction.
_J. R. Clough et al. (2022). A Topological Loss Function for Deep-Learning Based Image Segmentation Using Persistent Homology. In TPAMI._
_Shen et al. (2024). Molecular Topological Deep Learning for Polymer Property Prediction. In arXiv._
**(W5) Can the simplicial complexes be used in Transformer?**
Yes, simplicial complexes can be used in Transformers to incorporate higher-order relationships and geometric structures into the model. While traditional Transformers operate on sequences or graphs, simplicial complexes extend these structures by modelling higher-order interactions (e.g., triangles, tetrahedra) rather than just pairwise relationships. This can enhance the model's ability to capture complex dependencies in data. For example, Cellular Transformer (CT) (Ballester et al., 2024) leverages algebraic topology to form cell complexes from graph data and inputs to the transformer model, where the accuracy of graph classification is enhanced.
_Ballester et al. (2024). Attending to Topological Spaces: The Cellular Transformer. In arXiv._
**(W6) In the era of LLMs, models like ChatGPT, DeepSeek, and LLaMA have already achieved strong performance in short-text classification. For closed-source LLMs, we can use APIs and Apps, while for open-source LLMs, we can fine-tune them on local machines. Why not use these LLMs for short-text classification?**
Due to the word limit constraints, we refer to the response to Reviewer Ntsf (W1) for similar concerns.
**(W7) In the loss function in Eq. 12, both the classification loss and contrastive loss use features generated from the BERT model. It is difficult to justify whether the proposed SCN encoder contributes, and to what degree, to the performance.**
We have included a comparison between with and without contrastive loss in the Appendix section, where the loss functions are utilised for SCN and BERT separately, and the contribution of SCN is highlighted. | Summary: The paper proposes Contrastive Learning with Simplicial Convolutional Networks (C-SCN) for short-text classification. The method constructs document simplicial complexes to capture higher-order interactions beyond simple pairwise relationships and integrates a contrastive learning framework that leverages both structural representations from the simplicial convolutional network and sequential representations from transformer models. The authors demonstrate improvements on several benchmark datasets in few-shot learning settings.
Claims And Evidence: The authors claim that their approach can better capture long-range and higher-order interactions in short texts compared to traditional graph-based models, leading to enhanced classification performance. However, the evidence provided is not fully convincing. In particular, the paper does not adequately justify the necessity of a complex graph-based model when simple large language models (LLMs) could potentially address these issues. Additionally, the baselines used are outdated, and the absence of comparisons with current LLM-based methods weakens the evidence supporting the proposed claims.
Methods And Evaluation Criteria: While the construction of document simplicial complexes and the integration of contrastive learning are interesting, the methods lack a compelling motivation. The paper does not sufficiently explain why a graph-based model is preferred over simpler LLMs (e.g. llama 3), especially when techniques like skip connections or graph rewiring can mitigate issues related to high-order dependencies in GNNs. Moreover, the evaluation criteria are standard for short-text classification, but the outdated baseline comparisons reduce the impact of the reported improvements.
Theoretical Claims: NA.
Experimental Designs Or Analyses: As discussed in the previous section, LLM baselines are needed.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: LLM-related papers.
Other Strengths And Weaknesses: The presentation quality can be improved.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **(W1) The paper does not adequately justify the necessity of a complex graph-based model when simple large language models (LLMs) could potentially address these issues.**
We would like to highlight the novelty of our work in pioneering the use of higher-order simplicial complexes for short-text classification, offering a novel geometric perspective on text representation and classification and advancing the theoretical understanding of higher-order simplicial complexes in text classification. This lays the groundwork for future research in geometric machine learning. Unlike black-box LLMs, our approach leverages higher-order simplexes to pave the way towards interpretable insights into the relationships between short texts. In addition, with a significantly smaller trainable parameter set, our method is particularly effective in resource-constrained environments, providing a lightweight alternative to computationally intensive large language models (LLMs).
Due to time constraints, we obtained the results from Llama3.1-8B with two datasets, as shown in the table below.
| Model | Twitter F1 | Twitter Acc | MR F1 | MR Acc |
|-----------|-------|-------|-------|-------|
| BERT | 54.92 | 51.16 | 51.69 | 50.65 |
| GPT2 | 67.41 | 67.76 | 50.77 | 53.18 |
| RoBERTa | 56.02 | 52.29 | 52.55 | 51.30 |
| Llama3-8B | 34.06 | 50.30 | 70.13 | 71.67 |
| C-SCN | 75.61 | 76.09 | 69.46 | 69.87 |
We can identify that Llama3, with extensive pre-training and a large parameter count of 8B, handles the longer, properly written English of movie reviews more effectively, as shown by its improved performance on the MR dataset. On the Twitter dataset, however, which features shorter texts and non-standard English, Llama3 may not perform well in the few-shot setting. For example, we found that Llama3 treats both “:(” and “:)” as negative sentiments and classifies non-standard English words, such as “thankyou” and “followback”, into the negative category, resulting in worse performance. In contrast, C-SCN leverages both structural and contextual information to prevent the pre-trained embeddings from dominating the sentence representations, and handles these cases better with the same number of examples and far fewer parameters. This demonstrates particular strength in domains where short text data exhibits complex relational structures, which are naturally modelled by higher-order simplicial complexes. While LLMs achieve higher overall performance on one dataset, our approach introduces a novel framework that opens new avenues for exploring geometric representations in text classification.
We will consider the impact of text lengths and grammatical correctness in future work, as short texts from online resources, including tweets and reviews, may pose challenges to short text classification tasks.
**(W2) The baselines used are outdated, and the absence of comparisons with current LLM-based methods weakens the evidence supporting the proposed claims.**
The baseline models we selected include several published after 2020: DADGNN (2021), SHINE (2021), and NC-HGAT (2022), in addition to GIFT (2024).
**(W3) The paper does not sufficiently explain why a graph-based model is preferred over simpler LLMs (e.g. llama 3), especially when techniques, like skip connections or graph rewiring, can mitigate issues related to high-order dependencies in GNNs.**
In addition to the comparison with LLM in the previous question, skip connections (Xu et al., 2021) and graph rewiring (Topping et al., 2022) are introduced to mitigate the over-squashing problem of GNNs and to incorporate higher-order dependencies. However, skip connections combine graph features at different scales and introduce complex interactions among layers. This may diminish the effectiveness of shallow network structures and introduce architectural complexity. Furthermore, graph rewiring may compromise the sparsity of graphs (Barbero, 2024). In contrast, C-SCN maintains the original graph structure with a shallow layer and incorporates the higher-order dependencies with contrastive learning in one pipeline of training, demonstrating its effectiveness and efficiency in the few-shot learning setting.
_Xu, K., Zhang, M., Jegelka, S., & Kawaguchi, K. (2021). Optimization of Graph Neural Networks: Implicit Acceleration via Skip Connections and Increased Depth. CoRR, abs/2105.04550._
_Topping, J., Di Giovanni, F., Chamberlain, B. P., Dong, X., & Bronstein, M. M. (2022). Understanding Oversquashing and Bottlenecks on Graphs via Curvature. In International Conference on Learning Representations._
_Barbero, F., Velingker, A., Saberi, A., Bronstein, M. M., & Di Giovanni, F. (2024). Locality-aware graph rewiring in GNNs. In The Twelfth International Conference on Learning Representations._
Due to the word limit, we would like to clarify more during the comment period.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and providing additional results.
However, my concerns are not fully addressed:
1. My major concern still holds. I still believe that the proposed problem of "leveraging higher-order structure", which explores interactions across distant words, is an already explored problem in the NLP community. Both BERT-style and GPT-style language models capture this kind of interaction.
2. Skip connections and rewiring were not proposed only to solve the over-squashing problem. In fact, skip connections were also introduced to handle higher-order information before over-squashing was investigated; please check JKNet [Xu et al., 2018] for details.
3. Although I appreciate the effort of adding LLM results, the LLM compared against is not large enough. The results are not convincing when GPT2 outperforms the LLM on Twitter.
[Xu et al., 2018] Representation Learning on Graphs with Jumping Knowledge Networks. In ICML 2018.
Since my major concerns still hold, I would like to keep my score. | Summary: Due to limited labels and sparsity in words and semantics, short text classification has attracted much attention. Most current models adopt self-supervised contrastive learning across different representations, but the generated samples and external auxiliary information cannot guarantee effectiveness, and they also cannot extract higher-order information. The authors propose a novel document simplicial complex construction for a higher-order message-passing mechanism. By contrasting the structural representation with the sequential representation generated by the transformer mechanism, thereby improving outcomes and mitigating these issues, the C-SCN model outperforms existing models on four benchmark datasets.
Claims And Evidence: It sounds convincing; however, the authors do not provide solid proof.
Methods And Evaluation Criteria: From the results on the four benchmark datasets, the method looks convincing. However, more theoretical proof is demanded.
Theoretical Claims: This paper only provides definitions and methodology. No theoretical proof.
Experimental Designs Or Analyses: The four selected benchmark datasets are commonly used, and the results show the model's effectiveness. However, the authors did not provide the source code.
Supplementary Material: I have not found the Supplementary Material.
Relation To Broader Scientific Literature: This paper has compared the proposed model with several baselines.
Essential References Not Discussed: Boosting Short Text Classification with Multi-Source Information Exploration and Dual-Level Contrastive Learning[J]. arXiv preprint arXiv:2501.09214, 2025.
Other Strengths And Weaknesses: Strengths
1. This work is well written and easy to follow.
2. The authors proposed a novel document simplicial complex construction for a higher-order message-passing mechanism. The C-SCN model outperforms existing models on four benchmark datasets.
3. The results on the evaluation datasets demonstrate a remarkable performance improvement achieved by the proposed method.
Weaknesses
1. This paper has compared the proposed model with several baselines and provides definitions and methodology. However, more theoretical proof is demanded.
2. The authors did not provide the source code of the model.
3. We would like the authors to clarify whether the proposed model has solved the challenges discussed at the beginning.
Other Comments Or Suggestions: Please see the weaknesses.
Questions For Authors: It is an interesting topic. However, please clarify whether the proposed model has completely solved the challenges discussed at the beginning.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **(W1) This paper has compared the proposed model with several baselines. They provide definitions and methodology. However, more theoretical proof is demanded.**
We would like to refer to the following sources for theoretical support. To compare the expressiveness of graph neural networks with neural network structures that involve higher-order objects, such as simplicial complexes, the Weisfeiler-Lehman graph isomorphism test (WL test) (Weisfeiler & Lehman, 1968) is commonly used.
Bodnar et al. (2021) extended the WL test with simplexes, known as Simplicial WL or SWL, by involving boundary and coboundary relations. The following theorem backs up the algorithm’s expressiveness.
**Theorem 1**. SWL is strictly more powerful than WL at distinguishing non-isomorphic graphs.
Due to the word limit, we would like to include detailed proof during the discussion phase if needed. The following theorem explains our motivation to adopt the framework in our model architecture.
**Theorem 2**. With sufficient layers and injective aggregators, Message Passing Simplicial Networks (MPSN) is as powerful as SWL.
_Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. Nauchno-Technicheskaya Informatsia, 2(9):12–16, 1968._
_Cristian Bodnar, Fabrizio Frasca, Yuguang Wang, Nina Otter, Guido F Montufar, Pietro Liò, and Michael Bronstein. Weisfeiler and Lehman go topological: Message passing simplicial networks. In Proceedings of the 38th International Conference on Machine Learning, 2021._
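To make the discussion concrete, the colour-refinement procedure behind the WL test can be sketched in a few lines. The snippet below is our own minimal illustration of the ordinary 1-WL test (not the simplicial SWL extension of Bodnar et al.), with graphs given as adjacency lists; it shows a classic pair of non-isomorphic 2-regular graphs that 1-WL fails to distinguish:

```python
from collections import Counter

def wl_histogram(adj, rounds=3):
    """1-WL colour refinement: repeatedly re-colour each node by its own
    colour together with the multiset of its neighbours' colours."""
    colors = {v: 0 for v in adj}  # start with a uniform colouring
    for _ in range(rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # relabel signatures canonically so histograms are comparable
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: relabel[signatures[v]] for v in adj}
    return Counter(colors.values())

def cycle(n, offset=0):
    """Adjacency list of an n-cycle on nodes offset..offset+n-1."""
    return {offset + i: [offset + (i - 1) % n, offset + (i + 1) % n] for i in range(n)}

c6 = cycle(6)                                # one 6-cycle
two_c3 = {**cycle(3), **cycle(3, offset=3)}  # two disjoint triangles

# 1-WL cannot tell these non-isomorphic 2-regular graphs apart
print(wl_histogram(c6) == wl_histogram(two_c3))  # True
```

SWL, by refining colours over simplices and their (co)boundaries as well, does distinguish such pairs, which is the content of Theorem 1 above.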
**(W2) The authors did not provide the source code for the model.**
To enhance the reproducibility of our work, we have included the pseudo-code in the Appendix section. We would like to include the code as supplementary material when the edit function is enabled.
**(W3) We would like to see the authors clarify whether the proposed model has solved the challenges discussed at the beginning.**
Our work identifies three key challenges at the beginning, and the proposed model architecture addresses each of them, as detailed below.
_Challenge 1: The data augmentation step and the negative sampling step of contrastive learning may distort the semantic meaning and introduce unnecessary noise._
Removing graph components is adopted as a data augmentation strategy; however, this approach may disrupt the original meaning of the text. Consider an instance from the Movie Review (MR) dataset, “There's not enough to sustain the comedy”: removing the word “not” reverses the meaning of this short sentence.
In C-SCN, we did not create the positive and negative labels that are used in pre-training for the text data. Instead, we provided augmented views of texts by applying the structural SCN and sequential language models. This has circumvented the challenges of generating distorted semantic meanings in the positive and negative classes.
_Challenge 2: Auxiliary information, such as entities, latent topics, and part-of-speech (POS) tags (e.g., nouns and verbs), may be added to graph models for language understanding and enriching the limited available local context. However, this step might introduce misinformation, such as pulling documents that express opposite semantics but share similar topics._
Assigning entities, such as a film name, might pull texts that compliment the film and texts that dislike it into a closer neighbourhood. By the homophily assumption, which states that nodes with similar characteristics or labels tend to be connected, the model learns that the two texts are more similar to each other than to other texts that do not mention the entity name but share similar semantics.
In C-SCN, we did not include any auxiliary information that might mislead the model in learning neighbourhood information related to entities, latent topics, or POS tags. Instead, we leverage the pre-trained information from the sequential language model BERT in the augmenting step to enhance the contextual understanding when contrasting with SCN.
_Challenge 3: Graph models are mathematically limited in modelling higher-order features, such as group-wise interactions among a few nodes and edges expressed in terms of phrases._
The short sentence “It is what it is” uses repetition to emphasise acceptance of the status quo, while graph models with only nodes and edges learn pairwise interactions. They need additional layers for words to incorporate the meaning of words further apart; the group-wise phrase “it is” needs to be linked with “what” to model such repetition.
In C-SCN, we adopted the simplicial complex to incorporate higher-order relations through the message-passing mechanism. This enables the grouped phrase “it is”, as an edge, to be connected to the third node “what” through the 2-simplex (filled triangle) that is formed.
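As an illustration of this idea, the sketch below builds a toy simplicial complex from a token sequence. The specific rule used here (consecutive word pairs as edges, consecutive word triples as filled triangles) is our simplifying assumption for illustration, not necessarily the exact document complex construction of the paper:

```python
def sentence_complex(tokens):
    """Toy simplicial complex from a token sequence: words become
    0-simplices, adjacent word pairs 1-simplices, and consecutive
    word triples 2-simplices (filled triangles)."""
    nodes = {(w,) for w in tokens}
    edges = {tuple(sorted({tokens[i], tokens[i + 1]}))
             for i in range(len(tokens) - 1)
             if tokens[i] != tokens[i + 1]}
    triangles = {tuple(sorted({tokens[i], tokens[i + 1], tokens[i + 2]}))
                 for i in range(len(tokens) - 2)
                 if len({tokens[i], tokens[i + 1], tokens[i + 2]}) == 3}
    return nodes, edges, triangles

tokens = "it is what it is".split()
nodes, edges, triangles = sentence_complex(tokens)
print(sorted(triangles))  # [('is', 'it', 'what')]
```

The single 2-simplex ('is', 'it', 'what') is exactly the filled triangle linking the edge “it is” to the node “what” described above.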
Due to the word limit, we would like to clarify more during the comment period. | null | null | null | null | null | null | null | null |
The Number of Trials Matters in Infinite-Horizon General-Utility Markov Decision Processes | Accept (spotlight poster) | Summary: This paper investigates how the number of trials affects policy evaluation in infinite-horizon general-utility Markov Decision Processes (GUMDPs) for both the discounted and average cases. For the discounted case, the authors demonstrate that a mismatch generally exists between the finite- and infinite-trial formulations and provide lower and upper bounds to quantify this discrepancy. For the average case, they show that the structure of the underlying GUMDP also influences the mismatch. Finally, experiments on three simple GUMDPs validate the theoretical findings.
### update after rebuttal
The authors' rebuttal has addressed my questions. I keep my positive rating.
Claims And Evidence: The claims made in the paper are well supported by clear proof and the experiment results.
Methods And Evaluation Criteria: The proposed methods and benchmarks make sense for the problem.
Theoretical Claims: I briefly checked the proofs of Remarks 3.1 and 3.2 and of Theorems 4.1, 4.2, and 4.3. They look correct to me.
Experimental Designs Or Analyses: The experimental design follows previous work (e.g., the choice of GUMDPs), and the results align well with the theoretical findings.
Supplementary Material: I reviewed some proofs in the supplementary material.
Relation To Broader Scientific Literature: This work contributes to the study of GUMDPs, providing the first analysis of the impact of the number of trials.
Essential References Not Discussed: None
Other Strengths And Weaknesses: **Strengths:**
- This paper is very well-written, easy to follow, and self-contained. Sections 2 and 3 provide extensive background to prepare readers for the later derivations, and Section 4 is clearly structured to present the main results.
- This work makes novel contributions to the study of GUMDPs, providing the first analysis of the impact of the number of trials.
- The experimental results align well with the theoretical findings.
Other Comments Or Suggestions: * Line 377 (left): "Both $K$ and $H$ contribute to the tightness of the upper bound." However, under Theorem 4.3, the authors mention that "Finally, the upper bound does not get tighter as $H$ increases for fixed $K$." I may have missed something, but I would like to hear the authors' comments on this.
* Line 425 (left): It would be helpful if the authors could demonstrate how the equality holds for $\mathcal{M}_{f,2}$.
Questions For Authors: Please see comments to above questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review, comments, and concerns raised. We answer below the questions/concerns raised:
- *"Line 377 (left): "Both $K$ and $H$ contribute to the tightness of the upper bound." However, under Theorem 4.3, the authors mention that "Finally, the upper bound does not get tighter as $H$ increases, for fixed $K$." I may have missed something, but I would like to hear the authors' comments on this."*: The aim of our work is to show that the number of trials, $K$, matters. In the context of discounted GUMDPs, to analyze the mismatch between the finite and infinite trials settings, we introduced a truncated estimator (equation 5) that depends on $K$, the number of trajectories, but also on $H$, the length of the sampled trajectories. With our comment after Theo. 4.3. ("the upper bound does not get tighter as $H$ increases, for fixed $K$"), we wanted to emphasize that the number of trials $K$ is the key parameter regulating the tightness of the bound, i.e., even if $H$ is "very high" so that the bias term $2\gamma^H$ is very small, if $K$ is low then we expect the upper bound to be loose; then, if we gradually increase $K$, the upper bound tightens with a rate of $1/\sqrt{K}$. I.e., setting $H$ to be high alone will not make the upper bound tight. However, naturally, for a fixed $K$, increasing $H$ makes the bias of the estimator to decrease and, hence, we write while commenting our experimental results that "both $K$ and $H$ contribute to the tightness of the upper bound.". However, this should not distract us from our key objective, which is to show that $K$, the number of trials, matters irrespective of the value of $H$. Our experimental results illustrate this (Fig. 2): for any fixed $K$, increasing $H$ decreases the bias of the estimator, but the gap between the infinite and finite trials formulations only disappears if $K$ is sufficiently high (as well as $H$ so that the bias is low). We commit to making this clearer in the final version of our manuscript.
- *"Line 425 (left): It would be helpful if the authors could demonstrate how the equality holds for $\mathcal{M}_ {f,2}$."*: The intuition is that, for $\mathcal{M}_{f,2}$, the distribution of initial states is such that the agent always starts in $s_0$. Now, if the policy is stochastic in both states $s_0$ and $s_1$, it happens that the Markov chain induced by such a stochastic policy comprises a single recurrent class ($s_1$ is always reachable from $s_0$ and vice versa) and, hence, there exists no mismatch between the infinite and finite trials objectives (in light of our Theo. 4.4). Finally, for any deterministic policy, it happens that the agent always gets absorbed with probability one to the same recurrent class and, thus, the mismatch between the finite and infinite trials formulations fades away. More precisely, the only case where it is possible to have two recurrent classes is when the deterministic policy selects the right action in state $s_0$ and the left action in state $s_1$. However, since the agent always starts in $s_0$, recurrent class {$s_1$} is unreachable and, hence, there exists no mismatch between the finite and infinite trials formulations. This result agrees with our Theo. 4.6 since the probability of getting absorbed into one of the recurrent classes is one and the probability of getting absorbed to the other is zero and, hence, the lower bound is zero. We commit to making this clearer in the final version of our manuscript.
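For intuition, the truncated estimator discussed in the first point can be sketched as follows. The exact normalisation of equation (5) in the paper is not reproduced here; the standard $(1-\gamma)$ weighting, renormalised by $1-\gamma^H$ after truncation, is our assumption for this illustration:

```python
def truncated_occupancy(traj, gamma, H):
    """(1-gamma)-weighted empirical state occupancy of one trajectory,
    truncated after H steps and renormalised to sum to one."""
    d = {}
    for t, s in enumerate(traj[:H]):
        d[s] = d.get(s, 0.0) + (1 - gamma) * gamma ** t
    z = 1 - gamma ** H  # total truncated weight: sum_{t<H} (1-gamma) gamma^t
    return {s: w / z for s, w in d.items()}

# an alternating trajectory between states 0 and 1
traj = [t % 2 for t in range(1000)]
short = truncated_occupancy(traj, gamma=0.9, H=5)
long_ = truncated_occupancy(traj, gamma=0.9, H=1000)
# as H grows, the truncation bias (of order gamma^H) vanishes and the
# occupancy of state 0 approaches 1 / (1 + gamma)
print(short[0], long_[0])
```

This mirrors the point above: increasing $H$ only shrinks the bias of each trajectory's occupancy estimate, while the gap between the finite and infinite trials objectives is governed by $K$.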
We hope our answers addressed the reviewer's main concerns. | Summary: The paper analyzes the impact of the number of trials in estimating the objectives for GUMDPs. For both the discounted and average settings, it is shown by examples that there are mismatches between the finite-trial estimates and the actual infinite-trial objectives. Bounds on the mismatches are provided, with numerical results supporting the theoretical claims.
Claims And Evidence: - Theorem 4.1 shows by an example that there is mismatch for the discounted setting in general.
- Theorems 4.2 and 4.3 provide lower and upper bounds on the mismatch for the discounted setting.
- Theorem 4.4 claims that there is no mismatch in the average setting if the problem is unichain under any policy.
- Theorem 4.5 shows by an example that there is mismatch for the average setting in general.
- Theorem 4.6 provides a lower bound on the mismatch for the average setting given the set of recurrent classes.
- The existence of mismatches and their trends are illustrated in the numerical results.
Methods And Evaluation Criteria: Proofs are provided for the theorems.
Theoretical Claims: The proofs and proof sketches in the main text seem correct, though I did not check all details in the supplementary.
Experimental Designs Or Analyses: Three simple GUMDPs are considered in the numerical experiments. Although they are simple problems, the results illustrate the mismatches in objective estimations. The noiseless vs noisy results provide an interesting observation for convergence and non-convergence of the mismatches when the problem is unichain or not.
Supplementary Material: Quickly went through some proofs, but not in too much details.
Relation To Broader Scientific Literature: From the results of this work, we should be cautious when doing finite-sample policy evaluation.
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: In Fig 3b, it looks like there are some discontinuities in the performance of $M_{f, 3}$ around $\gamma=0.9$, where the finite-trial performance seems to diverge away from the infinite-trial one, but then converges back to it. Is that expected from the theoretical analysis?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review, comments, and concerns raised. We answer below the questions/concerns raised:
- *"In Fig 3b, it looks like there are some discontinuities in the performance of $M_{f,3}$ around $\gamma=0.9$, where the finite-trial performance seems to diverge away from the infinite-trial one, but then converges back to it. Is that expected from theoretical analysis?"*: In Fig 3 (b), in the right-most plot, we agree that there appear to exist some kind of discontinuities since, as $\gamma$ increases, some curves appear to start deviating more and more from the dashed line (infinite trials curve), eventually converging back together as $\gamma \approx 1$. We carried out a detailed analysis as to why this is the case and realized that this happens due to a particular interaction between the occupancy induced by the policy as we increase $\gamma$ and the objective function. Whether these discontinuities may appear depends on the particular GUMDP instance at hand and the policy considered. For example, as seen in Fig. 3 (b) for the other two GUMDPs, the trend is different as the mismatch fades away as $\gamma$ increases. Nevertheless, all the results in Fig. 3 (b) agree with our theoretical results: (i) for the discounted case $\gamma <1$, there exists, in general, a mismatch between the finite and infinite trials formulations, which fades away as the number of trajectories $K$ increases; and (ii) for the average setting ($\gamma \approx 1$), the finite and infinite trials formulations are equivalent since the noisy transition matrices enforce that the underlying GUMDPs have a single recurrent class (hence, being unichain) under the considered policies. Unifying the analysis of the discounted and average settings is beyond the scope of our work, and we envision several technical challenges will arise; even for the case of linear objectives/MDPs, such analysis can be challenging (e.g., the theory of Blackwell optimality).
We hope our answers addressed the reviewer's main concerns. | Summary: This paper analyzes the impact of the number of sampled trajectories on the estimation of the return for infinite-horizon MDPs with a general utility function. The classical MDP setup corresponds to linear utility, and in this case there is no bias induced by considering only a finite number of sampled trajectory - the main result of this paper is to show that this does not hold for the case of general utility. The paper highlights this situation for the case of policy evaluation, and provides numerical experiments.
## update after rebuttal
I thank the authors for taking the time to respond to my comments. My overall assessment of the paper is positive and I will keep my score.
Claims And Evidence: The proofs of the claims of this paper are sketched in the main body of the paper. More detailed proofs are provided in appendices.
Methods And Evaluation Criteria: The numerical experiments are restricted to a very simple setup (an MDP instance with a handful of states, as illustrated in Figure 1) but it suffices to provide a clear picture of the effects studied in this paper.
Theoretical Claims: I checked the sketch of the proofs in the main body and they appear correct.
Experimental Designs Or Analyses: As detailed above, I am fine with the numerical experiments. Perhaps it would have been nice to have studied a more structured/realistic MDP instance but the paper already does a great job on the theory side.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper is related to the growing literature on general utility MDPs. The paper does a good job at providing a concise literature review. To some extent the paper is based on demonstrating that a suggestion from another paper (Mutti et al. 2023) is wrong (line 61: *Finally, the authors suggest that the difference between finite and infinite trials fades away under the infinite-horizon setting*.)
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: * Strengths: well-written, mathematical sound, good literature review
* Witness: the scope of the topic covered in the paper is quite restricted - only policy evaluation, and only the impact on the number of trials. Some findings appear trivial when stated in English, e.g. line 78: ``*We show, both theoretically and empirically, that the agent’s performance may depend on the number of infinite-length trajectories drawn to evaluate its performance*”.
Other Comments Or Suggestions: * Why is there a minus sign in the definition of the expected discounted cumulative reward $\langle d_{\gamma,\pi}, -r\rangle$? Why not defining the optimization program on page 2 (arg min …) as a maximization? Is it because you’ll use convex functions $f$ afterward? Also, it should be $\pi* \in \arg \min$ instead of $\pi^* = \arg \min$ since $\arg \min$ is the set of all possible optimal policies.
* Redundant definition of item (i) before and after equation (3)
* In equation (3), is the minimum always attained, even among Markovian policies $\Pi_{M}$? Shouldn’t it be an $\inf$? Also, can you provide a reference here again for the optimality of stationary policies for the case of discounted GUMDPs + average unichain GUMDPs?
* Please recall the definition of $\zeta_\pi$ (line 194 + 200 + 210).
* Line 212: is the convergence $f_{K,H} \to f_{K}$ as $H\rightarrow + \infty$ uniform over $\pi$?
Questions For Authors: N/A --- see my comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review, comments, and concerns raised. We answer below the questions/concerns raised:
- *"Minus sign in the definition of the expected discounted cumulative reward*": We focused on the case of minimization (hence, convex $f$), similarly to what has been done in previous works (e.g., [1]). It is equally possible to consider the problem of maximization (hence, concave $f$), as, naturally, both formulations are equivalent.
- *"It should be $\pi^* \in \arg \min $ instead of $\pi^* = \arg \min $ since $\arg \min$ is the set of all possible optimal policies."*: We agree with the reviewer and will fix this in the new version of the manuscript.
- *"Redundant definition of item (i) before and after equation (3)"*: We thank the reviewer for the suggestion. We will incorporate it into the final version of our manuscript.
- *"In equation (3), is the minimum always attained, even among Markovian policies?"*: Under the average multichain setting, the set of possible occupancies attained by all Markovian (possibly non-stationary) policies corresponds to the closed convex hull of a given set of points (Theo. 8.9.3. in [2]). Thus, the minimum is always attained.
- *"Also, can you provide a reference here again for the optimality of stationary policies for the case of discounted GUMDPs + average unichain GUMDPs?"*: We thank the reviewer for the suggestion. We will add a reference to the results in [2] that are of relevance.
- *"Recall the definition of $\zeta_\pi$"*: We thank the reviewer for the suggestion. We will incorporate it into the final version of our manuscript.
- *"Line 212: is the convergence $f_{K,H} \to f_K$ as $H \rightarrow \infty$ uniform over $\pi$?"*: Yes.
We hope our answers addressed the reviewer's main concerns.
[1] - Zahavy, T., O’Donoghue, B., Desjardins, G., and Singh, S. - Reward is enough for convex mdps. CoRR (abs/2106.00661), 2021.
[2] - Puterman, M. L. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for taking the time to respond to my comments. My overall assessment of the paper is positive and I will keep my score. | Summary: The paper continues work in the area of the so-called general utility MDPs. The main contribution is the clarification that infinite horizon criteria will not close the "finite vs infinite trial gap" contrary to a suggestion by Mutti et al. (2023) who studied the gap in the finite horizon setting.
Specifically, the setting is as follows: Consider a decision maker who wants to pick a policy such that, in $K$ independent trials, the expected utility assigned to the average of the empirical occupancy measures underlying $K$ infinitely long trajectories is maximized. Formally, if $d_k$ is the empirical occupancy measure (discounted or average) underlying trajectory $1\le k \le K$, and $\bar{d}_K = (d_1+\dots + d_K)/K$, the goal is to maximize $\mathbb{E}_\pi[ f(\bar{d}_K) ]$ by choosing an appropriate policy $\pi$. Here, $f$ maps distributions over state-action pairs to reals. One motivation to study this problem is to model risk-sensitive decision makers whose objective can be captured by a utility assigned to a state-action distribution.
As $K\to \infty$, since under $\mathbb{P}_\pi$ the measures $d_1,\dots,d_K$ are independent, the law of large numbers gives that $\bar{d}_K$ converges to $\mathbb{E}_\pi[ d_1 ]$ with probability one. Hence, when $f$ is smooth, a Taylor series expansion shows that $f(\bar{d}_K)$ converges to $f(\mathbb{E}_\pi[ d_1 ])$ with probability one as $K\to\infty$. The gap that the paper refers to is the difference between $\mathbb{E}_\pi[ f(\bar{d}_K) ]$ and this latter expression.
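This gap can also be probed numerically. Below is a sketch on a toy construction of our own (not an instance from the paper): each infinite trajectory is absorbed into one of two recurrent classes with probability $1/2$, so $d_1$ is a random vertex of the simplex, and the strongly convex utility $f(d)=\|d\|_2^2$ is used. For this quadratic $f$ the gap equals $2\,\mathrm{Var}(\bar d_K[0]) = 1/(2K)$, consistent with an $\Omega(1/K)$ rate:

```python
import random

def f(d):
    # strongly convex utility: squared 2-norm of the occupancy vector
    return d[0] ** 2 + d[1] ** 2

def finite_trials_value(K, n_rep=100_000, seed=0):
    """Monte Carlo estimate of E[f(d_bar_K)]: each of the K trajectories
    lands in recurrent class 0 or 1 with probability 1/2, and d_bar_K is
    the empirical frequency vector of the two classes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rep):
        b = sum(rng.random() < 0.5 for _ in range(K))  # trajectories in class 0
        total += f((b / K, 1 - b / K))
    return total / n_rep

inf_value = f((0.5, 0.5))  # infinite-trials objective f(E[d_1]) = 0.5
for K in (1, 2, 4, 8):
    gap = finite_trials_value(K) - inf_value
    print(f"K={K}: empirical gap ~ {gap:.4f} (exact: 1/(2K) = {1 / (2 * K):.4f})")
```

Note that the trajectory length plays no role here: the randomness that survives infinitely long trajectories is exactly the absorption event, which is why only $K$ closes the gap.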
The first results concern the discounted setting. Here, the first main result (Theorem 4.2) shows that the gap is lower bounded by an $\Omega(1/K)$ term for $f$ strongly convex. The second main result (Theorem 4.3) shows that even if we truncate trajectories after $H$ steps and $f$ is convex Lipschitz, the gap is upper bounded by $O(\sqrt{1/K}+\gamma^H)$. The first result only applies to stationary policies.
In the average cost setting, focusing on stationary policies, the first result (Theorem 4.4) shows that the gap is zero for unichain MDPs provided $f$ is continuous and bounded. Next, the authors point out this does not necessarily hold for multichain MDPs (Theorem 4.5) and finish up with a general lower bound on the gap that is similar to Theorem 4.2 that applies to multichain MDPs (Theorem 4.6).
Finally, empirical results illustrate the theoretical findings.
## update after rebuttal
My assessment did not change after the rebuttal. The paper answers a well-defined question, asked in a previous paper, and is well-written. It is of interest for the community in the sense that the precursor paper hinted at some answer and this paper clarifies in a rigorous fashion whether that hint was correct. Yet, this question raised in the previous paper does not feel like a well thought out question -- as I explained it beforehand. Because of the merits of the paper, I still recommend weak accept -- take the paper if there is enough room.
Claims And Evidence: The general claim is that just because the horizon is infinite, the gap will not disappear. Theorems 4.2 and 4.6 make this precise. In addition, the empirical study also suggest that the gap exist. I think overall the evidence is strong.
Methods And Evaluation Criteria: The empirical study is reasonable and the results do make sense in the context of the paper.
One small methodological remark is that the authors somehow say they ran experiments with $H=\infty$. How can this be? I suppose this was just approximated by taking $H$ large. Is this convincing? Well, Fig. 2 suggests that this is a reasonable approach, but ultimately, I think it is better to admit that there is no way to study this empirically (or am I missing something)?
Theoretical Claims: The claims are precise, they are believable. Theorem 4.3 (upper bound for discounted setting) is quite predictable; the strength is in the lower bounds. None of the results are surprising, but the lower bounds required some careful calculation. Skimming through the proofs of these, I believe they are correct.
Experimental Designs Or Analyses: No issues; it is OK to run on these small MDPs; the plots are reasonably chosen.
Supplementary Material: I skimmed through the supplementary, spending a little more time on the lower bound proof main steps.
Relation To Broader Scientific Literature: There is a sequence of papers studying these problems and the literature is very well cited as far as I know.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: **Originality**: Fair. The results have not been known, published. I guess the lower bounds are neat.
**Significance**. Questions I would ask:
Q1) Should we care about the finite trial objective?
Q2) Should we care about the gap described?
Q3) Are we surprised that there is this gap?
On Q1: While the finite trial objective with finite-horizon problems moves things closer to realism, the finite trial objective with infinite-horizon problems lacks realism: taking infinitely many steps in the MDP is just not something that is going to happen in our world with finite resources and fixed execution speed. So why study this problem? Of course, the follow-up to this is to ask why study the infinite horizon setting to start with, and the answer there is that the infinite horizon setting is chosen for elegance, generality, and insight; it is an idealization and as such it has its own role. Yet, idealization moves against realism, so we get a contradiction.
Another answer to Q1 (and answer to Q2, as well) is that the finite trial objective appears to lead to "hard" problems. It is OK to point this out, but is this very exciting? Will we get algorithms that perform significantly better under the finite trial objective than if we instead consider the infinite trial objective? I have some doubts; the results show that under reasonable conditions the algorithms in the latter category will pay a price of size $1/\sqrt{K}$. The constant may be large here, but it is hard to imagine practical scenarios where $K$ is small and the decision maker knows enough to run their finite trial optimizer to do significantly better than what is predicted by this gap.
On Q3: The gap is not too surprising. It is not hard to realize that infinite-length trajectories will often fail to "wash out all randomness". I am guessing Mutti et al. thought of the one nice case when this happens, hence their comment. Is it worth clarifying this? In as much as we care about the finite trial objective, yes.
**Novelty**: The results are definitely novel; the tools used in the proofs are what one expects to see. Again, the lower bounds are a bit more interesting.
**Clarity**: The paper is superbly well written, except for skipping over the questions raised above on why should we even care given that the infinite horizon setting is not even realistic and how can you even run experiments in this setting (this second is a very minor issue).
Other Comments Or Suggestions: N.A.
Questions For Authors: Is there a way to justify looking at this problem given that the infinite horizon setting is an idealization and the whole motivation is to move closer to realism?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and concerns raised. We answer below the questions/concerns raised:
- *"Is there a way to justify looking at this problem given that the infinite horizon setting is an idealization and the whole motivation is to move closer to realism?"*: We agree and acknowledge that the reviewer raises a relevant point, which leads to an interesting discussion. Still, we argue that there are key differences between the finite and infinite-horizon formulations and, therefore, we should see them as being complementary to each other.
First, the infinite-horizon setting has been thoroughly considered by previous works in the literature of GUMDPs/convex RL (e.g., [1] considers both discounted and average settings). Therefore, it is of great interest to make the research community aware that the infinite-horizon GUMDPs framework hides some implicit assumptions (infinite trials assumptions), and that the performance of a trained agent may significantly deviate (as we study in this work) at test time in comparison to the expected one. We also highlight that such a mismatch between the finite and infinite trials has been overlooked by previous research, as we describe in our Introduction.
Second, we argue the infinite-horizon framework is inherently different from the finite-horizon one and, thus, both should be studied in the context of GUMDPs (and, particularly, in the finite-trials regime). This is because infinite-horizon formulations make it possible to induce certain orders of preference over policies that are not so easily induced under a finite-horizon formulation. For example, one may want to use discounting to trade off between earlier/later costs/rewards (or in the case of GUMDPs, between the "importance" of states that are visited earlier or later in the trajectories). Also, infinite-horizon discounted GUMDPs can be used to model finite-horizon tasks where the length of the episode is uncertain (Sec. 5.3 in [2]). Therefore, we strongly believe that previous research adopted infinite-horizon formulations not only for their potential to simplify analysis (e.g., in the discounted setting, stationary policies are optimal) but also because they fundamentally differ from finite-horizon formulations in problem modeling, as exemplified above. Moreover, infinite-horizon formulations do not always simplify the problem yet remain widely used—e.g., in MDPs, the analysis of the average setting is arguably more intricate than the finite-horizon case. In conclusion, we view finite- and infinite-horizon formulations as complementary, both warranting study.
- *"Should we care about infinite-finite trials gap described?"* We argue that yes, we should care. For example, let us consider the single-trial case ($K=1$), which is very relevant in real domains where the agent may only be able to interact with the environment once (e.g., a robot/agent only has one life). In this case, the gap between the finite and infinite trials formulations can be significant, as our results showcase (constant $1/\sqrt{K}$ will not play a role here as it equals one). Even though our focus in this paper was on the task of policy evaluation, we agree with the reviewer that policy optimization in the single-trial setting is harder than in the case of infinite trials. However, we conjecture and have some preliminary evidence that it is, indeed, possible to do much better than the infinite trials policy in a reasonably efficient fashion for the single-trial setting. We leave such a study for future work.
- *"On running experiments with $H = \infty$:"* As suggested by the reviewer, in practice, it is impossible to draw trajectories of infinite-length. With this in mind, when running our experiments, we tuned $H$ so as to mitigate the impact of trajectories' truncation in our experimental results. We did this by running our experiments with "sufficiently high" $H$ values and plotting the values of the objectives as we increase $H$ and looking at how fast the curves were stabilizing (similar to the curves in Fig. 2). This allowed us to have a good confidence that the picked $H$ makes the impact of trajectories' truncation negligible.
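The tuning loop described in this answer can be sketched as follows; this is a toy illustration under assumed names (a generic two-state chain and a generic strongly convex objective, not the authors' code), where one watches $f$ of the truncated, renormalized discounted occupancy stabilize as $H$ grows:

```python
import numpy as np

def truncated_discounted_occupancy(P, mu0, gamma, H, rng):
    """Empirical discounted state occupancy of one trajectory truncated at H,
    renormalized to compensate for the discarded tail mass gamma**H."""
    n = P.shape[0]
    d = np.zeros(n)
    s = rng.choice(n, p=mu0)
    for t in range(H):
        d[s] += (1.0 - gamma) * gamma**t
        s = rng.choice(n, p=P[s])
    return d / d.sum()

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # toy stochastic transitions
mu0 = np.array([0.5, 0.5])               # initial state distribution
f = lambda d: np.sum(d**2)               # a strongly convex objective
for H in (10, 100, 500):                 # pick H where the curve flattens
    vals = [f(truncated_discounted_occupancy(P, mu0, 0.95, H, rng))
            for _ in range(100)]
    print(H, np.mean(vals))
```

Once the printed values stop changing as $H$ increases, the truncation bias is negligible relative to the sampling noise, which matches the stabilization criterion described in the rebuttal.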
We hope our answers addressed the reviewer's main concerns.
[1] - Zahavy, T., O’Donoghue, B., Desjardins, G., and Singh, S. - Reward is enough for convex mdps. CoRR (abs/2106.00661), 2021.
[2] - Puterman, M. L. Markov decision processes: discrete stochastic dynamic programming. John Wiley \& Sons, 2014.
---
Rebuttal Comment 1.1:
Comment: Thanks, I appreciate the answers.
Some thoughts on the arguments brought forward in the rebuttal. First on whether the infinite-horizon setting is interesting.
* "has been thoroughly considered by previous works in the literature": Just because others previously studied something does not necessarily make it interesting. That they studied it means the problem was interesting to them, but perhaps not for any real reason. Anyhow, I know this argument is brought up a lot to justify relevance, but generally I find this a weak argument. (I am sure the authors are also aware of this).
* "mismatch between the finite and infinite trials has been overlooked by previous research": this does not seem to answer to the question whether the infinite horizon setting is interesting.
* "infinite-horizon framework is inherently different from the finite-horizon one" [..] hence it should be studied. Different does not make it interesting.
* "infinite-horizon discounted GUMDPs can be used to model finite-horizon tasks where the length of the episode is uncertain" this is a valid reason! perhaps related to all this pondering about why study the infinite-horizon setting: as far as I see the previous work that pointed out the finite trial gap -- in the finite horizon setting -- did not quantify the size of the gap there. In fact, looking at the proof of Theorem 4.2, the proof does not seem to use anything about whether the horizon is finite or infinite. This makes the paper perhaps a little more interesting. (Related to this: Theorems 4.2 and 4.6 seem to be special cases of something more general. Perhaps this is also worth pointing out.)
* "infinite-horizon formulations do not always simplify the problem yet remain widely used" this does not answer the question either.
Second, on whether characterizing the size of the gap is interesting.
As suggested in the rebuttal, consider K=1. The gap in this case is large. Do we expect any *learning* algorithm to do well on the single trial objective? I guess we all would say no. This problem is "too hard". So the conclusion is perhaps that the single trial (or finite trial) objective, while it may look natural, will not be tractable in general. Knowing that the gap is large rules out some types of algorithms but it does not rule out others (for the single trial objective). So maybe as a first step towards understanding what can and cannot be done, one could analyze the gap (as done in this paper). | Summary: This paper focuses on General-utility MDPs (GUMDPs), which generalize the MDP framework by considering convex objective functions $f$ of the occupancy frequency $d$ induced by a given policy. The authors analyze the discrepancy between the finite-trials formulation $f_K$ and the infinite-trials formulation $f_\infty$ in the context of infinite-horizon discounted GUMDPs and average GUMDPs, extending prior research by [1], which focuses on the finite horizon.
References:
[1] Mutti, Mirco, et al. "Convex reinforcement learning in finite trials." Journal of Machine Learning Research 24.250 (2023): 1-42.
Claims And Evidence: The claims made in the submission seems to supported clearly based on the proof sketch and their experiment.
(1) **Remark 3.2**, analogous to Theorem 1 of [1], establishes that for a convex function $f$, the finite $f\_K$ and infinite $f\_\infty$ trials formulations differ for both discounted infinite-horizon GUMDPs and average-criteria GUMDPs. This result is derived using Jensen's Inequality: $f\_\infty(\pi) = f( \mathbb{E}[ \hat{d}\_{\mathcal{T}\_K} ] ) \leq \mathbb{E}[ f(\hat{d}\_{\mathcal{T}\_K}) ] = f\_K(\pi)$. Furthermore, **Theorems 4.1** and **4.5** use the GUMDP example (Fig. 1c) to demonstrate that for multichain GUMDPs with a strictly convex objective function $f$, the inequality $f_K > f_\infty$ holds strictly for infinite-horizon discounted GUMDPs and average GUMDPs, respectively.
(2) **Theorem 4.2** extends Remark 3.2, assuming a $c$-strongly convex function $f$ for infinite-horizon discounted GUMDPs, to provide a lower bound on $f\_K - f\_\infty$ via a Jensen's gap bound.
(3) **Theorem 4.3**, extending Theorem 2 of [1], considers infinite-horizon discounted GUMDPs with an L-Lipschitz function $f$. It establishes a high-probability upper bound on the difference between the practical finite $K$-trial $H$-horizon value function and the true infinite-horizon value function $| f\_\infty - f(d\_{\mathcal{T}\_K,H}) |$ with Boole's and Hoeffding's inequalities.
(4) **Theorem 4.4**, in contrast to Remark 3.2, proves that when the average GUMDP is unichain and the continuous objective function $f$ is bounded, then $f\_K = f\_\infty$. This result is established using the ergodic theorem and properties of expectation.
(5) **Theorem 4.6** extends Lemma 3 of [1] to average GUMDPs, providing a lower bound on $f_K - f_\infty$ for c-strongly convex function $f$. The bound is expressed as the sum of the variance of the Bernoulli distribution of the recurrent states' probability.
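The Jensen mismatch in (1) and the $K$-dependence behind the bounds in (2) and (5) can be checked numerically on a toy example in the spirit of Fig. 1c; the values of $p$, the objective, and the simulation setup below are hypothetical, not taken from the paper:

```python
import numpy as np

# Toy multichain setting: each trajectory gets absorbed in recurrent state 1
# w.p. p or state 2 w.p. 1-p, so the K-trial empirical occupancy over the two
# states is (q, 1-q) with q ~ Binomial(K, p)/K.
rng = np.random.default_rng(0)
p, n_rep = 0.3, 20000
f = lambda D: np.sum(D**2, axis=-1)      # a strictly (indeed strongly) convex f

f_inf = f(np.array([p, 1.0 - p]))        # infinite-trials value f(E[d_hat])
for K in (1, 10, 100):
    q = rng.binomial(K, p, size=n_rep) / K
    d_hat = np.stack([q, 1.0 - q], axis=1)
    gap = np.mean(f(d_hat)) - f_inf      # f_K - f_inf >= 0 by Jensen
    print(K, gap)                        # decays like 2*p*(1-p)/K for this f
```

For this particular $f$ the gap equals $2p(1-p)/K$ exactly in expectation, illustrating both the strict mismatch for small $K$ and its vanishing as $K$ grows.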
References:
[1] Mutti, Mirco, et al. "Convex reinforcement learning in finite trials." Journal of Machine Learning Research 24.250 (2023): 1-42.
Methods And Evaluation Criteria: The authors support their claims with three simple GUMDP examples. The example in Figure 1c is well-explained, and Theorems 4.1, 4.5, and Figure 3 effectively illustrate the discrepancy between the finite and infinite trial formulations with this example.
However, the purpose of Figures 1a and 1b is unclear. While the figure caption states that all GUMDPs have deterministic transitions, the states in Figures 1a and 1b appear to have two distinct transitions each, raising ambiguity about whether these transitions correspond to different actions. Additionally, the authors assert that all GUMDPs in Figure 1 are multichain, but this is not explicitly justified. Providing a clearer explanation of what Figures 1a and 1b represent would help readers better understand their purposes.
Theoretical Claims: It is unclear how the Hoeffding style inequality in the proof sketch and the full proof of Theorem 4.3 follows from Lemma 16 in [1], that proves the feasibility of the optimal policy. A more explicit explanation or a clearer connection between these results would help clarify the derivation.
References:
[1] Efroni, Y., Mannor, S., and Pirotta, M. Exploration-exploitation in constrained mdps. CoRR, abs/2003.02189, 2020.
Experimental Designs Or Analyses: The authors' experiments primarily focus on simple tabular examples with varying values of $K$ trials and discount $\gamma$, and these seem to be correct. However, the analysis is limited to small problems, and it would be beneficial to see how the value-function mismatch behaves in larger domains.
Supplementary Material: I have reviewed the proofs in the supplementary material and did not find any inconsistencies or issues beyond the concerns raised in the **Theoretical Claims** section regarding a reference used in the proof of Theorem 4.3.
Relation To Broader Scientific Literature: Although this work is similar to [1], which focuses on finite-horizon GUMDPs, it provides valuable insights into the convergence properties of infinite-horizon discounted GUMDPs and average GUMDPs.
References:
[1] Mutti, Mirco, et al. "Convex reinforcement learning in finite trials." Journal of Machine Learning Research 24.250 (2023): 1-42.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Other Weakness:
1. Note that this paper is an extension of the finite-horizon work in [1] to the analysis of infinite-horizon discounted and average GUMDPs. The main weakness of the paper lies in its novelty, as it appears quite similar to [1] and uses many of the same techniques to derive its theorems. The paper does not clearly specify which aspects of the extension from [1] are particularly challenging.
Other Strengths:
1. The paper does offer a new insight in Theorem 4.4, showing that unichain GUMDPs satisfy $f\_K = f\_\infty$ in average GUMDP.
2. The paper is well-written, with good structure and readability.
References:
[1] Mutti, Mirco, et al. "Convex reinforcement learning in finite trials." Journal of Machine Learning Research 24.250 (2023): 1-42.
Other Comments Or Suggestions: The paper would be in a stronger position if it explicitly specified which aspects are identical to [1] and which parts of the extension present significant challenges. As it stands, it closely resembles [1] and applies similar techniques to extend the analysis from finite-horizon to infinite-horizon discounted and average GUMDPs.
References:
[1] Mutti, Mirco, et al. "Convex reinforcement learning in finite trials." Journal of Machine Learning Research 24.250 (2023): 1-42.
Questions For Authors: Please address the questions :
(1) Which aspects are identical to [1] and which parts of the extension present significant challenges.
(2) The figure caption states that all GUMDPs have deterministic transitions, yet Figures 1a and 1b depict states with two distinct transitions each. It would be helpful to clarify what these transitions represent.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and concerns raised. We answer below each of the questions/concerns raised:
- *"(1) Which aspects are identical to [1] and which parts of the extension present significant challenges.":* We thank the reviewer for asking about the differences between our article and [1], particularly regarding the technicalities of our results and proofs. Our analysis fundamentally differs, from a technical point of view, from that in [1] where the authors consider the finite-horizon case. The key difference is due to the fact that discounted and average occupancies are inherently different than occupancies induced under the finite-horizon setting. The fact that the occupancies are defined over trajectories of infinite length forced us to seek alternative ways to prove our results. This is because we are mostly worried about the long-term behavior of the Markov chain induced by the policy, and not focused on a fixed number of interaction steps as considered in [1].
The kind of results we provide also significantly differ from those in [1]: we are the first work to provide lower bounds on the mismatch between the finite and infinite trials objectives, as [1] only proves upper bounds in the context of policy optimization. Also, while [1] shows that, in general, there exists a mismatch between the finite and infinite trials formulations under finite-horizon settings, we prove a result that states that under certain conditions (the GUMDP being unichain), there exists no mismatch. Hence, the nature of our results and those in [1] greatly differ.
Finally, although Theo. 4.3. is indeed related to Theo. 2 in [1], we focused on the policy evaluation case and had to incorporate the fact that the trajectories are discounted. All our other theorems (Theo. 4.1, 4.2, 4.4, 4.5, and 4.6) are not connected at all to any result in [1], relying on significantly different proof techniques and lines of reasoning than those considered in [1]. For example, Theo. 4.2 and 4.6 rely on the (quite general) strongly convex assumption that was not considered in [1]; Theo. 4.4. relies on multiple Lemmas (Appendix C.2.) related to the study of the ergodicity of the Markov chain induced when conditioning the GUMDP with a given policy.
We commit to clarify in the final version of the article the novel aspects of our work in comparison to [1].
- *"(2) The figure caption states that all GUMDPs have deterministic transitions, yet Figures 1a and 1b depict states with two distinct transitions each. It would be helpful to clarify what these transitions represent."*: We apologize for any confusion our visual depiction of the GUMDPs may have caused. We will improve this in the final version of our paper. Essentially, the GUMDPs we depict in Fig. 1 all have deterministic transitions. In Figs. 1 (a) and 1 (b), the different transitions coming out of the state nodes encode the transitions associated with each of the two actions (we commit to explicitly adding the action labels to the arrows to make it clearer). Finally, in the experimental section of our work (Sec. 4.3), we considered stochastic variants of the GUMDPs depicted in Fig.1. We consider the same GUMDPs as depicted in Fig. 1, but add a small amount of noise to the transition matrices so that there is a non-zero probability of transitioning to any other arbitrary state. Naturally, for the case of noisy transitions the visual depiction of the GUMDPs would be slightly different as the transitions are now stochastic, but we decided to present only the illustration of the GUMDPs for the case of deterministic transitions to make the illustrations easier to parse.
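The noise-injection step described in this answer can be sketched generically as follows; the mixing weight `eps` is a hypothetical choice, and this is not necessarily the authors' exact noise model:

```python
import numpy as np

def perturb_transitions(P, eps):
    """Mix every row of a transition matrix with the uniform distribution,
    so each next state has probability at least eps / n."""
    n = P.shape[1]
    return (1.0 - eps) * P + eps / n

P = np.array([[1.0, 0.0], [0.0, 1.0]])   # deterministic toy transitions
P_noisy = perturb_transitions(P, 0.05)
print(P_noisy)   # rows still sum to 1 and every entry is positive
```

Any recipe of this form keeps the rows stochastic while guaranteeing a non-zero probability of transitioning to every state, which is the property the rebuttal relies on.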
We hope our answers addressed the reviewer's main concerns.
---
Rebuttal Comment 1.1:
Comment: I have reviewed the authors' response and appreciate their clarifications. Based on their rebuttal, we have adjusted our recommendation accordingly.
Although the paper is not groundbreaking in terms of novelty, primarily extending finite-horizon GUMDP analysis to discounted infinite-horizon and average criteria, it is worth noting that the authors employ distinct proof techniques and steps to establish their bounds and conclusions, owing to the different nature of the occupancies in these settings. Additionally, the paper provides valuable insights, notably showing that the mismatch vanishes when the average GUMDP is unichain and the continuous objective function is bounded. | Summary: This paper analyzes infinite-horizon GUMDPs, highlighting the critical role of the number of sampled trajectories in policy evaluation. The authors theoretically and empirically demonstrate that, unlike standard MDPs, GUMDP performance metrics significantly depend on the number of trials, presenting theoretical lower and upper bounds for both discounted and average settings. They establish conditions under which finite and infinite trial objectives mismatch, particularly emphasizing multichain versus unichain structures.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The paper assumes strongly convex and continuous utility functions with bounded domains. While mathematically convenient, realistic utility functions might violate these assumptions, potentially limiting theoretical findings’ practical relevance.
Experimental Designs Or Analyses: Experiments rely solely on illustrative toy scenarios with deterministic and artificially simplified state-action transitions. This makes the empirical validation questionable for practical GUMDPs in complex environments.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper extends and significantly clarifies the conditions under which finite versus infinite trials matter in GUMDPs, directly complementing prior theoretical work. It effectively highlights differences in utility evaluations across trial conditions, an issue previously underexplored in the literature.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: See other parts.
Other Comments Or Suggestions: See other parts.
Questions For Authors: 1. Have the authors considered how their theoretical results hold when relaxing assumptions on utility function convexity or continuity, which might often be violated in real-world scenarios?
2. Could the authors provide additional empirical analysis with more realistic, stochastic environments (beyond the simplistic toy examples), to demonstrate clearly the practical significance of their theoretical bounds?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and concerns raised. We answer below the questions/concerns raised:
- *"Have the authors considered how their theoretical results hold when relaxing assumptions on utility function convexity or continuity, which might often be violated in real-world scenarios?''* We thank the reviewer for raising the discussion on how our assumptions may limit the generality of our results. Our assumptions remain valid under the great majority of objectives in the convex RL/GUMDPs literature and, hence, our results readily apply to many different tasks such as maximum state entropy exploration or imitation learning. We address each of our assumptions below:
1. **Strongly convex objectives:** We require the strong convexity assumption to prove the lower bounds of Theo. 4.2 and 4.6. All objective functions we consider in our work, which we believe to be representative of common objective functions used in the literature (e.g., maximum state entropy exploration, imitation learning, etc.), satisfy this assumption. We refer to Appendix C for the proofs of strong convexity. Naturally, if the objective is linear, as is the case for standard MDPs/RL, the objective function is not strongly convex; **however**, it should be clear that in such a case there does **not** exist a mismatch between the finite and infinite trials objectives and, thus, proving bounds on the mismatch between the objectives is meaningless. Finally, if our convexity assumptions are not met (e.g., the function is non-convex), we conjecture that the mismatch between the finite and infinite trials continues to exist in general, but we leave the study of such a case for future work.
2. **Continuous and bounded objectives:** We use the assumption that the objective is bounded and continuous to prove Theo. 4.4 and 4.6, which rely on the bounded convergence theorem. All objective functions we consider in our work are continuous and, up to the authors' knowledge, this assumption is satisfied for every objective function previously considered in the literature of GUMDPs. Thus, we believe our assumption is as general as it can be. Regarding the boundedness of the objective function, several objective functions (e.g., entropy and norm-based objectives) satisfy this assumption, as we state in our article. In the case of imitation learning tasks with KL-divergence-based objectives, as stated in the text before Theo. 4.4., we need to ensure that occupancy $d_\beta$ (the occupancy induced by the policy we want to imitate) is lower-bounded to meet our assumption. Nevertheless, we argue that: (i) this assumption is commonly used by previous theoretical works; and (ii) in practice, one can ensure the distribution is lower-bounded by adding a small value ($\epsilon$) to every entry of the occupancy. Finally, we highlight that boundedness assumptions are commonly considered in the literature of MDPs/RL.
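The point in item 1 (no finite/infinite-trials mismatch for linear objectives, a strict Jensen gap for strongly convex ones) can be illustrated with a two-outcome toy computation; the probability $p$ and reward vector below are hypothetical, not from the paper:

```python
import numpy as np

p = 0.3
# A single trial yields occupancy e_1 w.p. p or e_2 w.p. 1-p; its mean is d_bar.
atoms = np.array([[1.0, 0.0], [0.0, 1.0]])
probs = np.array([p, 1.0 - p])
d_bar = probs @ atoms

r = np.array([2.0, -1.0])
f_lin = lambda d: d @ r                   # linear objective (standard RL reward)
f_cvx = lambda d: np.sum(d**2, axis=-1)   # strongly convex objective

gap_lin = probs @ f_lin(atoms) - f_lin(d_bar)   # E[f(d)] - f(E[d])
gap_cvx = probs @ f_cvx(atoms) - f_cvx(d_bar)
print(gap_lin)   # 0.0: expectation passes through a linear f
print(gap_cvx)   # close to 0.42 = 2p(1-p): strict Jensen gap
```

For the linear objective the expectation commutes with $f$, so the two formulations coincide exactly, while any strictly convex $f$ produces a positive gap on this example.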
- *"Could the authors provide additional empirical analysis with more realistic, stochastic environments (beyond the simplistic toy examples), to demonstrate clearly the practical significance of their theoretical bounds?"*
1. **Simplistic environments/experiments:** While our GUMDPs comprise a small set of states, we argue they are representative of several tasks considered by previous works in the literature of convex RL/GUMDPs (we refer to Fig. 1 for a description of our GUMDPs). Also, we aimed to provide the simplest possible GUMDPs/experimental settings to stress the generality of our results, as acknowledged by reviewers 3gey and dxJG. We mainly used our empirical results to validate the theoretical results of our paper, which readily apply to GUMDPs of any dimension under the assumptions stated by each result, which we believe to be general enough to be representative of the great majority of objectives used by previous works.
2. **Experiments with stochastic transitions:** We also provide experiments with stochastic transitions in the paper (Fig. 3 (b)) and analyzed how the stochasticity of the transitions impacts our results, connecting our experimental results with the theoretical findings.
We hope our answers addressed the reviewer's main concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their rebuttal. After considering the response, I have decided to maintain my original score. | null | null |
Flowing Datasets with Wasserstein over Wasserstein Gradient Flows | Accept (oral) | Summary: This work presents a novel theoretical framework for handling probability distributions over probability distributions. The authors begin by rigorously establishing the Wasserstein over Wasserstein (WoW) distance metric from a functional analysis perspective, providing a mathematical foundation for measuring distances between distributions of distributions. Building upon this metric structure, they develop a comprehensive theory of gradient flows, incorporating maximum mean discrepancy (MMD) with Sliced-Wasserstein based kernels as a tractable objective functional. The theoretical framework is extensively validated through diverse experimental scenarios. These include transformations between synthetic distributions (demonstrated through a three-ring scenario), domain adaptation tasks, dataset distillation, and transfer learning applications. To ensure accessibility and reproducibility, the authors provide detailed mathematical derivations, comprehensive background material, and thorough theoretical analyses. This includes explicit connections to optimal transport theory, probability theory, functional analysis, and variational inference, making the work both theoretically rigorous and practically applicable.
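The "MMD with Sliced-Wasserstein based kernels" ingredient mentioned in the summary rests on the sliced-Wasserstein distance; a generic Monte-Carlo estimator (an illustrative sketch, not the paper's implementation) looks like:

```python
import numpy as np

def sliced_wasserstein2(X, Y, n_proj=200, seed=0):
    """Monte-Carlo estimate of the squared sliced-Wasserstein-2 distance
    between two empirical measures with the same number of samples."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # random directions
    # In 1-D, the W_2 optimal coupling matches sorted samples (quantiles).
    pX = np.sort(X @ theta.T, axis=0)
    pY = np.sort(Y @ theta.T, axis=0)
    return np.mean((pX - pY) ** 2)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = X + np.array([3.0, 0.0])             # shifted copy of the same cloud
print(sliced_wasserstein2(X, X))         # 0 for identical clouds
print(sliced_wasserstein2(X, Y))         # > 0, roughly |shift|^2 / d here
```

Averaging such one-dimensional transport costs over random directions is what makes the kernel evaluations in the paper tractable compared to full Wasserstein distances.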
Claims And Evidence: The claims made in this submission are thoroughly supported by both theoretical foundations and empirical evidence. Specifically:
1. Theoretical Rigor:
- The mathematical framework is developed with precise definitions and rigorous derivations
- All theoretical claims are supported by formal proofs.
2. Empirical Validation:
- The experimental results comprehensively validate the theoretical framework across multiple scenarios:
* Synthetic distributions (three-ring transformation)
* Domain adaptation tasks
* Dataset distillation
* Transfer learning applications
- Each experiment provides quantitative metrics or qualitative visualizations that convincingly demonstrate the method's effectiveness
3. Technical Soundness:
- The implementation details are thoroughly documented
- The experimental protocols are clearly specified and reproducible
In conclusion, the submission maintains a high standard of scientific rigor, with all claims well-supported by both theoretical analysis and experimental results.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-designed and appropriate for demonstrating the effectiveness of the proposed approach.
Theoretical Claims: As an applied mathematics researcher, I have carefully examined most of the theoretical derivations and proofs in the manuscript. While the mathematical framework is generally sound and well-justified (for me), I would like to raise several technical concerns that merit further discussion:
1. Domain Specification of the Functional:
- On page 3, the authors define $\mathcal{F}:\mathcal{P}_2(\mathcal{M})\to\mathbb{R}$
- For most machine learning applications, particularly those involving distances or discrepancy measures, shouldn't the codomain be specifically $\mathbb{R}^+ \cup \{0\}$?
2. Treatment of Higher-Order Terms:
- The derivations frequently approximate higher-order terms as $o$ terms
- While this is a common practice in differential geometry, its applicability to $\text{W}_{\text{W}_2}$ warrants further justification
- My understanding is that the finite second moment property of Wasserstein space might justify this approximation, but is this operation still suitable for WoW?
3. Continuity Equations in WoW Framework:
- An interesting theoretical extension would be the formulation of continuity equations within the $\text{W}_{\text{W}_2}$ framework for a functional $\mathcal{F}$. Specifically, for $\mathcal{F}[\rho]$ in Wasserstein space, we may define $\frac{\partial \rho}{\partial \tau}=\nabla\cdot(\rho\nabla\frac{\delta \mathcal{F}[\rho]}{\delta \rho})$. Can we have a similar PDE in WoW?
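For reference, the classical Wasserstein-2 gradient-flow PDE underlying this question is the continuity equation driven by the first variation of the functional (with the descent convention, the velocity carries the minus sign, so the divergence term enters with a plus):

```latex
\partial_\tau \rho_\tau + \nabla \cdot (\rho_\tau v_\tau) = 0,
\qquad
v_\tau = -\nabla \frac{\delta \mathcal{F}}{\delta \rho}[\rho_\tau]
\quad\Longleftrightarrow\quad
\partial_\tau \rho_\tau = \nabla \cdot \Big( \rho_\tau \, \nabla \frac{\delta \mathcal{F}}{\delta \rho}[\rho_\tau] \Big).
```

The open question raised here is whether an analogous PDE holds over $\mathcal{P}_2(\mathcal{P}_2(\mathbb{R}^d))$ with a suitable notion of first variation.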
Experimental Designs Or Analyses: I have examined the experimental designs and suggest two specific improvements:
1. Ring Transformation Experiments:
- Include comparisons with relevant baselines:
* KALE flow [1]
* Kernel KL divergence [2]
---
References:
1. KALE Flow: A Relaxed KL Gradient Flow for Probabilities with Disjoint Support
2. Statistical and Geometrical properties of regularized Kernel Kullback-Leibler divergence
Supplementary Material: I have reviewed almost all the supplementary material. While some of the mathematical proofs exceed my expertise and I cannot fully verify their correctness (for example appendix b.2), I find the supplementary material generally comprehensive and well-documented.
However, in the section discussing Wasserstein-Fisher-Rao, it would be worthwhile to address potential issues related to asynchronous computation. Specifically, prior works have highlighted problems where updating locations and weights separately might lead to inconsistencies [1],[2]. Including a discussion of this issue would provide a more thorough treatment of the topic.
---
References
[1]. DPVI: A Dynamic-Weight Particle-Based Variational Inference Framework
[2]. GAD-PVI : A General Accelerated Dynamic-Weight Particle-based Variational Inference Framework
Relation To Broader Scientific Literature: Yes, the key contributions of the paper are foundational and have potential implications for broader applications of the proposed approach.
Essential References Not Discussed: The paper does not currently discuss related works on semi-implicit variational inference, which are similar to the proposed approach in appendix E.2. For example:
1. Variational Bayes with Stein Mixture Inference
2. Particle Semi-Implicit Variational Inference
Other Strengths And Weaknesses: ## Strengths:
1. The paper is relevant to the ICML conference.
2. The experimental results are sufficient.
## Weaknesses:
1. The limitations and future research directions are not given explicitly.
2. The experiments can be conducted on the recommender system scenario for a wider impact.
3. This work mainly focuses on the MMD functional; the reviewer is unsure how to implement such an approach for $f$-divergences, which are of great importance for designing sampling algorithms.
Other Comments Or Suggestions: 1. Presentation: Including a visualization of the gradient for the deep learning backend pipeline in Section 4.1 could improve clarity and make the simulation workflow easier to understand.
2. Adding an MP4 or a GIF for three ring case could more effectively showcase the superiority of the proposed approach in the supplementary zip file.
Questions For Authors: 1. **How does the proposed method handle scenarios where the labels are not categorical, such as in transfer learning between regression tasks**?
2. Why do the authors mainly consider the sliced Wasserstein distance rather than the vanilla Wasserstein distance or the Sinkhorn distance?
3. The loss curve in the `Rings.ipynb` appears to fluctuate. How can the convergence of the proposed approach be effectively validated?
4. Is it possible to extend the proposed approach to constrained support? For example, Dirichlet distribution on the manifold.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your appraisal and positive comments on our paper. We address your comments below.
**Domain Specification of the Functional**
For most ML applications where we aim to minimize distances, we agree that the codomain is $\mathbb{R}_+\cup \\{+\infty\\}$. Nonetheless, the differential structure also holds for $\mathbb{R}$ as codomain, which may cover e.g. potential energies with negative $V$.
**Higher-Order Terms**
The $o(W_{W_2})$ notation can be defined using the Landau notation. The main difficulty when using these terms in the article is whether we can integrate them when applying the Taylor expansion. This can be handled using regularity hypotheses such as a bounded Hessian.
**Continuity Equations in WoW Framework**
We expect a similar continuity equation $\partial\_t \mathbb{P}\_t = \mathrm{div}(\mathbb{P}\_t \nabla_{W\_{W\_2}}\mathbb{F}(\mathbb{P}\_t))$. In Proposition 3.7, we took a first step by showing that AC curves satisfy a continuity equation. However, to derive the equation of the WoW gradient flow, we would need to show that the minimizing movement scheme converges towards the right equation (see e.g. (Santambrogio, 2017)), and to define an appropriate notion of divergence operator on this space, e.g. the one proposed in (Schiavo, 2020). We leave these questions for future work.
**Comparisons with relevant baselines.**
Thank you for these suggestions. KALE flows and the flows of the KKL divergence are not designed to optimize over $\mathcal{P}(\mathcal{P}(\mathbb{R}^d))$. In Figure 4, we showed an example of the flow of the MMD with the Riesz kernel, which, like KALE flows and the KKL divergence, is defined on $\mathcal{P}(\mathbb{R}^d)$.
**Related works.**
Thank you for pointing to us these related works which we were not aware of.
**Limitations and future research directions.**
We will add a paragraph in the revised version of the paper indicating future research directions (e.g. using other kernels, f-divergences or continuous labels) and limitations (e.g. theory on compacts manifolds, lack of continuity equation for the flow, dimension of xps).
**This work mainly focus on the MMD functional.**
We focus on the MMD functional as it can be decomposed as a potential and an interaction energy, which we know how to differentiate in the WoW space (see Section 4.2).
Using this approach for $f$-divergences could be of great interest, e.g. for sampling. We plan to tackle this in future works, but we note that it is not straightforward, as we would need to identify a base measure w.r.t. which the measures need to be AC (see e.g. (Schiavo, 2020)), and then compute the WoW gradient of these functionals.
**Adding a GIF for 3 ring case.**
Thank you for this suggestion. We will add https://ibb.co/5qgjhgC.
**1.Labels not categorical?**
Handling continuous labels is an interesting extension of our method. This can be viewed as conditioning with continuous labels, akin to an infinite mixture. More formally, we could define it as $\mathbb{P}=\int \delta_{\mu_y}\mathrm{d}\lambda(y)$ with $\mu_y$ the conditional distribution given the label $y$ and $\lambda$ a distribution over the labels. For a discrete distribution $\lambda$, this would recover the discrete case we use in the paper. In practice, we would need to discretize $\lambda$.
**2. Why the authors consider the SW distance rather than wasserstein or sinkhorn?**
We consider SW for two reasons:
1. It is much faster to compute than the Wasserstein or Sinkhorn distance (complexity of $O(Ln\log n)$ for SW versus $O(n^3\log n)$ for Wasserstein and $O(n^2\log n/\varepsilon^2)$ for Sinkhorn, see e.g. [1]).
2. SW makes it possible to define valid positive definite kernels [2], which is not the case for the Wasserstein distance [1].
[1] Peyré, G., & Cuturi, M. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 2019.
[2] Kolouri, S., Zou, Y., & Rohde, G. K. Sliced Wasserstein kernels for probability distributions. IEEE Conference on Computer Vision and Pattern Recognition. 2016.
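For illustration, a minimal Monte-Carlo sketch of the Sliced-Wasserstein estimator makes the $O(Ln\log n)$ cost concrete (a generic sketch, not the authors' implementation; the uniform direction sampling and the choice of $L$ are illustrative defaults):

```python
import numpy as np

def sliced_wasserstein_sq(X, Y, L=100, seed=None):
    """Monte-Carlo estimate of the squared Sliced-Wasserstein distance
    between two equal-size empirical measures X, Y of shape (n, d).
    Each of the L projections reduces the problem to 1D, where W_2 is
    computed by sorting -- hence the O(L n log n) cost."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = rng.standard_normal((L, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # directions on the sphere
    proj_X = np.sort(X @ theta.T, axis=0)  # (n, L), sorted per projection
    proj_Y = np.sort(Y @ theta.T, axis=0)
    return np.mean((proj_X - proj_Y) ** 2)  # average squared 1D W_2

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
Y = rng.standard_normal((500, 2)) + np.array([3.0, 0.0])  # shifted copy
sw2 = sliced_wasserstein_sq(X, Y, L=200, seed=1)
```

For two identical clouds shifted by a vector $m$ in dimension $d$, the estimate concentrates around $\|m\|^2/d$ (here $\approx 4.5$), up to Monte-Carlo and sampling noise.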
**3. The loss curve in the Rings.ipynb appears to fluctuate.**
The loss curve may fluctuate as SW is estimated through a Monte-Carlo approximation. We observed empirically that for a sufficient number of iterations and with well chosen hyperparameters, it converges well and it fluctuates around small values, close to the minimum.
**4. Is it possible to extend the proposed approach to constrained support?**
The proposed flow might be combined with methods to constrain the support such as Mirror Descent, which has been recently extended to Wasserstein gradient flows in [1], or using barrier methods as in [2].
[1] Bonet, C., Uscidda, T., David, A., Aubin-Frankowski, P. C., & Korba, A. Mirror and preconditioned gradient descent in wasserstein space. NeurIPS 2024.
[2] Li, L., Liu, Q., Korba, A., Yurochkin, M., & Solomon, J. Sampling with mollified interaction energy descent. ICLR 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed and thoughtful response to my questions. I truly appreciate the effort you have invested in improving the submission and the careful consideration given to the feedback. In light of these updates, *I have reassessed the paper and decided to increase my score*, as I now believe this work makes a stronger contribution to the fields of gradient flow and data mining. Additionally, *I hope to see future work or further experiments exploring non-categorical labels*.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score and again for your positive comments! | Summary: The paper proposes a framework for optimizing functionals over spaces of probability measures over probability measures. The approach is based on Wasserstein over Wasserstein (WoW) gradient flows. The main contribution is a theoretical definition of this flow. The authors also introduce objectives that are tractable within this framework. The approach is validated on synthetic datasets and small vision datasets for distribution flows, domain adaptation, and dataset distillation.
Claims And Evidence: Claim 1: Flowing datasets can be represented as probability distributions over probability distributions.
Claim 2: A differential structure exists on the Wasserstein space over the Wasserstein space.
These claims are supported by a rigorous mathematical foundation, utilizing optimal transport, Riemannian structures, and geodesics.
Claim 3:
This approach can be applied to distribution flows, domain adaptation, and dataset distillation.
This claim is supported by experimental results (see next section).
Methods And Evaluation Criteria: The approach is first illustrated on a synthetic dataset (Three Rings). The qualitative results in this case are convincing.
Then, the approach is validated on several tasks:
- Domain Adaptation: Qualitative results of data flowing from MNIST to other datasets are presented. A classification task is then proposed, where a network is pretrained on a dataset, and accuracy is measured as the data flows from the initial dataset to another one (with aligned classes).
- Dataset Distillation: The WoW gradient flow is used to generate a condensed dataset that maintains high classification accuracy.
- Transfer Learning: The WoW gradient flow is also applied to transfer learning, generating a condensed dataset that preserves high classification accuracy.
For all experiments, the quantitative results support the initial claims and are convincing. However, there are some limitations:
- The qualitative results (Figures) are not entirely convincing and do not clearly suggest a continuous flow from one dataset to another.
- The results are limited to very small datasets and toy problems.
Theoretical Claims: I reviewed the proofs, but I am not expert enough to verify their correctness.
Experimental Designs Or Analyses: The experimental design and analysis are well-suited to the problem, except for the limitation in problem size.
Supplementary Material: I read the supplementary material, which provides a deeper description of the background, detailed proofs, and additional examples. The authors also offer a full implementation with illustrative notebooks.
Relation To Broader Scientific Literature: The paper is well-situated within the current literature, and the authors demonstrate a strong understanding of the state of the art in the field.
Essential References Not Discussed: no.
Other Strengths And Weaknesses: Strengths:
- The paper is very rigorous and mathematically strong.
- Introducing a tractable gradient flow for probability distributions over probability distributions, even if limited to small problems, is an important contribution.
Weakness:
- The experiments are restricted to very small datasets.
Other Comments Or Suggestions: Typos :
- In this experiment, we generate sample from different datasets starting from Gaussian noise -> In this experiment, we generate **samples** from different datasets starting from Gaussian noise
Questions For Authors: How can the method be applied to larger-scale problems?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your appraisal and positive comments on our paper. We answer your comments below.
**The qualitative results (Figures) are not entirely convincing and do not clearly suggest a continuous flow from one dataset to another.**
We observed empirically that the flow moves very fast from the source dataset towards the neighborhood of the target dataset, but takes more time to converge towards clean images of the target dataset. This might explain why the flows do not appear very continuous, e.g. in Figure 2, as the discretization in time is linear. In Figure 9, we reported a finer discretization, which shows the behaviour slightly better.
We will also add in the zip supplementary materials the following gif showing the evolution of the rings (https://ibb.co/5qgjhgC).
**How can the method be applied to larger-scale problems?**
As the computational complexity of computing the MMD with a Sliced-Wasserstein kernel is $O(C^2 Ln(\log n + d))$ for datasets with $C$ classes and $n$ samples in each class, we believe that we can scale this algorithm to larger datasets with more classes and more samples per class.
The bottleneck will probably be for higher dimensional datasets. We tried on a (relatively) higher dimensional dataset CIFAR10 (see Figure 10 in Appendix D.4), and observed that the flow scales well. However, it required a lot more optimization steps to converge (150K steps for CIFAR10 against 18K steps for MNIST) with the same number of Monte-Carlo samples.
To scale to higher dimensional datasets, there could be several solutions. On one hand, one can increase the number of projections $L$ to get a better approximation of the Sliced-Wasserstein distance, or use better approximations, e.g. control variates as in [1]. We could also use faster optimization algorithms to accelerate convergence, or other variants of the Sliced-Wasserstein distance more adapted to images, e.g. using convolutions as projections as in [2,3]. We leave these investigations for future works.
We also replicated the domain adaptation experiment (Figure 3) by flowing from the dataset SVHN to CIFAR10, which are both datasets of shape 32x32x3, see https://ibb.co/C3B3Ffty. It demonstrates that the method still works in moderately higher dimension. We will put it in Figure 3 in place of the KMNIST to MNIST example.
[1] Leluc, R., Dieuleveut, A., Portier, F., Segers, J., & Zhuman, A. Sliced-Wasserstein estimation with spherical harmonics as control variates. International Conference on Machine Learning (2024).
[2] Nguyen, Khai, and Nhat Ho. "Revisiting sliced Wasserstein on images: From vectorization to convolution." Advances in Neural Information Processing Systems 35 (2022).
[3] Du, C., Li, T., Pang, T., Yan, S., & Lin, M. "Nonparametric generative modeling with conditional sliced-Wasserstein flows." International Conference on Machine Learning (2023).
**Typos**
Thank you, we corrected the typo.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their precise answers, which confirm my recommendation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for your positive comments! | Summary: This paper introduces a framework for optimizing functionals over probability measures of probability measures by leveraging the Riemannian structure of this space to develop Wasserstein over Wasserstein (WoW) gradient flows. It provides a theoretical foundation for these flows and a practical implementation using Forward Euler discretization. The paper also proposes a functional objective, which is based on Maximum Mean Discrepancy with a Sliced-Wasserstein kernel, enabling computationally efficient gradient flow simulation. The method is applied to dataset modeling, treating datasets as mixtures of probability distributions corresponding to label-conditional distributions.
Claims And Evidence: The claim Riemannian structure of WoW space enables gradient flow dynamics is proved in Appendix and the effectiveness of SW-based MMD for dataset flows is validated empirically.
The claim of a tractable implementation lacks runtime analysis or complexity comparisons (e.g., runtime, number of projections, number of measures) with alternatives like Sinkhorn, especially for large-scale datasets, given that the authors claim tractability.
Methods And Evaluation Criteria: The methodology is sound, employing well-established optimal transport techniques and MMD functionals. The evaluation is based on classification accuracy improvements, which are a reasonable measure for transfer learning and dataset distillation tasks. The use of SW kernels is suitable as SW distances avoid the O(N^2) cost of Wasserstein. However, additional baselines and comparisons with alternative dataset adaptation methods would strengthen the evaluation.
Theoretical Claims: Proposition 3.7 assumes the base space M is compact, which is restrictive for unbounded domains. The authors do not address how their results extend to non-compact cases.
The continuity equation (Eq. 3.6) relies on the strong assumption that the velocity field vt is Lipschitz.
The class alignment via 1-NN and majority vote seems heuristic.
Experimental Designs Or Analyses: No ablation study on the number of SW projections L (fixed to 100).
Comparisons are limited to OTDD and basic baselines. Recent methods like Dataset Condensation with Gradient Matching are absent from Table 1.
The paper criticizes prior work for assuming Gaussian class-conditional distributions but not including an ablation study where the target dataset has non-Gaussian class distributions.
Supplementary Material: The supplementary material includes additional experimental details, theoretical background, and ablation studies.
Relation To Broader Scientific Literature: The paper builds on a rich literature of optimal transport, gradient flows, and dataset adaptation. It extends previous works on dataset dynamics in probability space by introducing a hierarchical structure through the WoW distance. The connection to prior work on Wasserstein gradient flows and MMD functionals is well-discussed.
Essential References Not Discussed: A related work on sliced Wasserstein kernel can be useful to discuss "Unbiased Sliced Wasserstein Kernels for High-Quality Audio Captioning", Luong et al.
Other Strengths And Weaknesses: Strengths: The paper introduces a novel and theoretically grounded approach to dataset adaptation, demonstrating strong empirical results.
Weaknesses: Experiments are only on *NIST datasets. Including more datasets on natural images could make the paper stronger e.g., CIFAR10, Imagenet.
Other Comments Or Suggestions: No
Questions For Authors: Can the method scale to larger datasets beyond MNIST variants?
The experiments use fixed hyperparameters (e.g., L=500 projections for SW, \tau=0.1 step size). How sensitive is the method to choices of L, \tau, momentum, and kernel bandwidth? Are there guidelines for tuning these in practice?
The WoW gradient flow involves nested Wasserstein computations. How does the computational cost of your method scale with the number of classes and samples per class? Could you provide a comparison of efficiency (runtime and memory usage) against methods like OTDD on large-scale datasets?
The paper assumes compactness and connectivity of the manifold for theoretical results. How do these assumptions impact practical applications where data may lie on non-compact spaces?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for reading the paper and for your feedback. We answer your comments below. Please do not hesitate if you have other questions.
**The continuity equation relies on the strong assumption that the velocity field vt is Lipschitz.**
In Proposition 3.7, we show that if $(\mathbb{P}\_t)\_t$ is an absolutely continuous curve, then we have $\|v_t\|_{L^2(\mathbb{P}_t)}\le |\mathbb{P}'|(t)$ and the continuity equation. We do not assume that the velocity field is Lipschitz.
**The class alignment via 1-NN and majority vote seems heuristic.**
A less heuristic solution to assign the classes is to compute an OT map between $\mathbb{P}=\frac{1}{C}\sum_{c=1}^C \delta_{\mu_c}$ and the target $\mathbb{Q}=\frac{1}{C}\sum_{c=1}^C \delta_{\nu_c}$ with the 2-Wasserstein distance as ground cost. We replicated the results of Figure 3 with this method and observe that it gives similar results. We will fix this in the revision.
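As a toy illustration of this assignment idea (not the authors' pipeline — the 1D class distributions, sample sizes, and brute-force search over permutations are all made up for exposition), aligning two uniform mixtures of $C$ class-conditional measures reduces to an optimal assignment over a $C \times C$ matrix of pairwise $W_2^2$ costs:

```python
import itertools
import numpy as np

def w2_1d_sq(x, y):
    # Exact squared W2 between equal-size 1D empirical measures: sort and pair.
    return float(np.mean((np.sort(x) - np.sort(y)) ** 2))

rng = np.random.default_rng(0)
centers = np.array([0.0, 5.0, 10.0, 15.0])
perm = np.array([2, 0, 3, 1])                              # target classes are shuffled
mus = [rng.normal(c, 1.0, 300) for c in centers]           # source class-conditionals
nus = [rng.normal(c, 1.0, 300) for c in centers[perm]]     # target class-conditionals

# OT between two uniform measures with C atoms each reduces to an optimal
# assignment (extreme points of the Birkhoff polytope are permutations), so
# brute force suffices for tiny C; scipy's linear_sum_assignment would scale.
cost = np.array([[w2_1d_sq(mu, nu) for nu in nus] for mu in mus])
C = len(centers)
assignment = min(itertools.permutations(range(C)),
                 key=lambda p: sum(cost[i, p[i]] for i in range(C)))
```

Here the recovered assignment is the inverse of the shuffle applied to the target classes.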
**Recent methods like Dataset Condensation with Gradient Matching (DC) are absent from Table 1.**
We chose to compare with methods that only optimize a distance to generate new samples, as these are the most comparable approaches, and we do not claim to be state of the art for dataset distillation. Thus, in Table 1 we compared with Distribution Matching (DM), introduced in [1]. Notably, [1] already compares DM with DC (see their Table 1), showing that their performances are comparable.
[1] Zhao, B., & Bilen, H. Dataset condensation with distribution matching. WACV 2023.
**Ablation study where the target dataset has non-Gaussian class distributions.**
As shown in Table 2, our method outperforms in most settings prior works that assume Gaussian class distributions, suggesting that avoiding this assumption leads to better performance on image datasets.
**Related work.**
Thank you for pointing to us this work, which seems relevant.
**Can the method scale to larger datasets beyond MNIST variants?**
Please find the answer in the response of Reviewer LUYC.
**How sensitive is the method to choices of L, tau, momentum, and kernel bandwidth?**
The method is sensitive to the kernel bandwidth when using the Gaussian SW kernel, as we showed in Figure 5 of Appendix D.2. In practice, we use the Riesz SW kernel, which does not require to tune a bandwidth.
As we use a gradient descent method, taking too large a step size will make the scheme diverge.
We use momentum to accelerate the convergence of the scheme, but we note that it still converges without momentum, albeit more slowly; see Figure 11 in Appendix D.4.
In low dimension, the method is not very sensitive to the number of projections $L$ (see e.g. the Figure https://ibb.co/SX2gtntd for the ring experiment). In higher dimension such as for MNIST, it is more sensitive, and a relatively big number of projections improves the convergence as it provides a better approximation of the gradient. We will add the following Figure (https://ibb.co/pr2MwMLH) showing results for different values of $L$ for the generation of MNIST samples.
**How does the computational cost of your method scale with the number of classes and samples per class?**
In our experiments, we focused on the minimization of the MMD with a kernel based on the Sliced-Wasserstein distance, which can be computed in $O(Ln\log n)$ between $\mu_n=\frac{1}{n}\sum_{i=1}^n \delta_{x_i}$ and $\nu_n=\frac{1}{n}\sum_{j=1}^n \delta_{y_j}$. For $C$ classes with each class having $n$ samples, the MMD has therefore a total runtime complexity of $O(C^2 Ln(\log n + d))$. Moreover, note that the pairwise Sliced-Wasserstein distances can be computed in parallel.
In contrast, OTDD first requires computing $C^2$ pairwise Wasserstein distances, which has a complexity of $O(C^2 n^3 \log n)$ if they are computed between the empirical distributions, and of $O(C^2 (nd^2 + d^3))$ if they are computed using the Gaussian approximation. Then, it requires solving a second OT problem between the $nC$ samples with cost $d\big((x,c), (x',c')\big) = \|x-x'\|^2 + W_2^2(\mu_c,\nu_{c'})$, which has a complexity of $O(n^3C^3\log(nC))$ and can be reduced to $O(n^2C^2\log(nC)/\varepsilon)$ using the entropic regularized problem.
We compared the runtime for the transfer learning experiment in https://ibb.co/4R0GtczT.
**The paper assumes compactness and connectivity of the manifold for theoretical results.**
We assume compactness and connectivity of the manifold for theoretical purposes, as we notably rely on the seminal results of [1]. Of course, this is not always the case in practical applications, and thus we acknowledge that there is still a gap to make the theoretical derivations hold on non-compact spaces. Nonetheless, it is reasonable to assume that the data lie on a compact space.
[1] Schiavo, L. D. A Rademacher-type theorem on L2-Wasserstein spaces over closed Riemannian manifolds. Journal of Functional Analysis, 2020.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the detailed rebuttal. I raised the score to 4 since my questions are addressed. I suggest the authors to include the discussion with reviewers in the revision.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score. We will add the elements discussed with the reviewers in the revised version. | null | null | null | null | null | null | null | null |
Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive? | Accept (poster) | Summary: The paper investigates why we cannot accurately translate scaling laws from the negative log-likelihood of tokens to the downstream accuracy of multiple-choice QA.
The main reasons accountable for the phenomenon are: 1. there is a sequence of transformations from NLL to accuracy; 2. the correlation between scaling up compute and the probabilities of the wrong choices.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No
Experimental Designs Or Analyses: Yes, the authors run through popular LLMs with intermediate pretraining checkpoints and popular QA benchmarks.
It is a good experimental setting for the question they would like to explore.
Supplementary Material: Yes, experimental details and survival part.
Relation To Broader Scientific Literature: I think it is highly related to the literature on how to evaluate the LLM during training.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Paper is pretty clear and the question they tried to investigate is important.
Other Comments Or Suggestions: The paper is pretty refined, but I do think the authors complicate things in several ways.
For example, using a PDF is more straightforward than using a CDF; people can easily distinguish where the density is high.
Also, regarding the wording "the fraction of accuracy=1": I believe "accuracy" would be enough.
Questions For Authors: Since we know that the transformations degrade the correlation, I would like to know how much each step contributes to this.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer Zmym for their thoughtful assessment of our work, particularly noting that our paper is "pretty clear" and addresses an "important" question with "good experimental settings."
### Quantifying Score-Compute Decorrelation Per Transformation
> Since we know that the transformations degrade the correlation, I would like to know how much each step contributes to this.
This is an incisive question. While the answer depends on the model family and the benchmark, Figure 4 suggests the answer, but we can be much more quantitative. We will create a new figure with 4 subfigures, one for each of the statistics of the score-compute correlation distributions (mean, median, AUC of CDF, and negative Wasserstein distance). The x axis will be the sequence of metrics, the y axis will be the statistic’s value, and the hue will be the model family. This will show exactly how much “predictability” is lost under each transformation.
If the paper is accepted, we will include this detailed quantitative breakdown in the final version, including a new table summarizing these contributions across benchmarks.
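A sketch of how these four statistics could be computed from a vector of per-sample correlations (the AUC-of-CDF and Wasserstein definitions below are one plausible operationalization, not necessarily the paper's exact ones, and the sample data are made up):

```python
import numpy as np

def correlation_summaries(r):
    """Summary statistics of a per-sample score-compute correlation
    distribution r with values in [-1, 1]: mean, median, AUC of the
    empirical CDF over [-1, 1], and negative Wasserstein-1 distance
    to the ideal point mass at r = 1."""
    r = np.sort(np.asarray(r, dtype=float))
    grid = np.linspace(-1.0, 1.0, 401)
    cdf = np.searchsorted(r, grid, side="right") / len(r)
    auc = cdf.mean() * 2.0              # approximate integral of the CDF over [-1, 1]
    neg_w1 = -np.mean(np.abs(r - 1.0))  # W1 to delta_1 has this closed form
    return r.mean(), np.median(r), auc, neg_w1

predictable = np.full(100, 0.95)                           # mass near r = 1
noisy = np.random.default_rng(0).uniform(-1.0, 1.0, 100)   # mass spread out
m1, med1, auc1, w1 = correlation_summaries(predictable)
m2, med2, auc2, w2 = correlation_summaries(noisy)
```

A predictable benchmark scores higher on mean, median, and negative Wasserstein distance, and lower on CDF AUC, than a noisy one.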
### On Visualizing Distributions
> using a pdf is way more straightforward than using a cdf, people can well distinguish where there is of high density.
We appreciate the suggestion about using PDFs instead of CDFs. We actually explored both approaches during our analysis. While PDFs can be more intuitive in some contexts, we found that CDFs (specifically complementary CDFs) offered three advantages for our particular analysis: (1) they avoid bandwidth parameter tuning artifacts that emerged with KDE-based PDF estimation; (2) they directly visualize the tail behavior critical for our analysis; and (3) they more clearly quantify "what fraction of samples have correlations above X" - the key relationship we wanted to highlight. We'd be happy to include a comparison of both visualization approaches in an appendix if you think this would benefit readers.
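Concretely, the complementary CDF directly answers point (3). A minimal sketch with toy correlation values (illustrative numbers, not our data):

```python
import numpy as np

def ccdf(values, thresholds):
    """Complementary CDF: fraction of samples strictly above each threshold."""
    v = np.asarray(values, dtype=float)[:, None]
    t = np.asarray(thresholds, dtype=float)[None, :]
    return (v > t).mean(axis=0)

corrs = np.array([0.99, 0.95, 0.90, 0.50, 0.10, -0.30])
frac_above = ccdf(corrs, [0.0, 0.8, 0.95])
```

Each entry of `frac_above` reads directly as "the fraction of samples with correlations above X", with no bandwidth parameter to tune.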
### Terminology
Thank you for suggesting clearer wording. We used 'fraction of accuracy=1' to precisely distinguish between two concepts: (1) the proportion of individual samples where the model answered correctly (binary per-sample outcome), versus (2) the overall accuracy score across the dataset (continuous value between 0-1). While mathematically equivalent, maintaining this precision helps readers follow our sample-specific correlation analysis. We'll revise this wording to improve clarity while preserving this distinction.
### Request for Additional Constructive Criticism
We value your assessment and would appreciate any additional specific suggestions that might strengthen the paper. In particular, are there specific analyses or clarifications about the transformation process that would make our findings more compelling or accessible to the broader ML community?
We appreciate your 'weak accept' assessment and are committed to strengthening the paper to earn a more enthusiastic endorsement. We believe the quantitative breakdown of correlation degradation across transformations will substantially enhance the paper's contributions and clarity.
Thank you again for your constructive engagement with our work. | Summary: This paper explores why predicting the downstream performance of advanced AI systems, especially on multiple-choice question-answering benchmarks, is difficult despite well-understood scaling laws during pre-training. The key finding is that downstream performance degrades as it involves comparing the correct choice to a small set of incorrect choices, making it harder to predict how probability mass shifts with scale. They highlight that to predict downstream capabilities accurately, it’s essential to account for how probability mass fluctuates between correct and incorrect choices as models scale. The study suggests that scaling laws for incorrect choices may be achievable.
## update after rebuttal
My main concern still stands. I believe the study of "unpredictable" scaling laws is narrowly scoped toward MCQ benchmarks. And even there the finding is rather superficial: probability mass on incorrect samples increases unpredictably. It is unclear why this happens. The authors agree that eventually, if we scale enough, this should not be a concern, since pre-training loss is a consistent objective. If so, then is that point (where we have enough data to predictably scale performance on MCQ) itself predictable? Many questions like these remain unanswered. Because of this, I am leaning towards a weak reject, **though after reading other reviews, I am willing to raise my score by 0.5 to 2.5, and would not be opposed to accepting this paper if other reviewers are willing to champion it**.
Claims And Evidence: - There are some scaling-predictable quantities (from parameters, data, compute), like log-prob over the vocabulary. But these undergo transformations that make the scaling-predictable quantities less predictable.
- Downstream performance depends heavily on the probability mass assigned to the incorrect choices as models scale. This suggests that we need to model information beyond scale-predictable quantities like log-prob.
- Continuous metrics like Brier score are insufficient to predict downstream performance.
Methods And Evaluation Criteria: Yes, the paper uses some common benchmarks and model families (though it does not cover Llama or Gemma model families which are common open source models).
- The paper conducts scaling analysis on a comprehensive set of model families with multiple models (scaling data, parameters and compute), including Pythia, Cerebras-GPT, OLMo, INCITE, LLM360.
- The work only evaluates downstream performance on NLP benchmarks, for which it uses: ARC, HellaSwag, MathQA, MCTACO, MMLU, OpenbookQA, PIQA, RACE, SciQ, SIQA, WinoGrande and XWinoGrad En.
- The performance is measured in terms of accuracy, Brier score, and probability mass on the correct choice.
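For concreteness, these three per-sample metrics can be computed from log-probabilities over the answer choices as follows (a generic sketch with made-up numbers; normalizing over the choices only is one common convention and may differ from the paper's exact setup):

```python
import numpy as np

def mc_metrics(logprobs, correct_idx):
    """Per-sample metrics for multiple-choice QA.
    logprobs: (n_samples, n_choices) log-probabilities per choice.
    Returns binary accuracy, probability mass on the correct choice,
    and the multi-class Brier score, all per sample."""
    # Softmax over choices (numerically stabilized)
    p = np.exp(logprobs - logprobs.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    n = len(p)
    onehot = np.zeros_like(p)
    onehot[np.arange(n), correct_idx] = 1.0
    acc = (p.argmax(axis=1) == correct_idx).astype(float)  # binary per-sample outcome
    p_correct = p[np.arange(n), correct_idx]
    brier = ((p - onehot) ** 2).sum(axis=1)
    return acc, p_correct, brier

lp = np.log(np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.6, 0.3]]))
acc, p_correct, brier = mc_metrics(lp, np.array([0, 2]))
```

The first sample is answered correctly (accuracy 1, Brier 0.14); the second is not (accuracy 0, Brier 0.86) even though it assigns non-trivial mass to the correct choice.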
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: Their experimental protocol is stated cleanly -- I really like their Figure 3, which presents the degradation of predictive power on the ARC benchmark. In general, they follow the following protocol.
- The authors compute the correlation between score and compute, as we scale compute in a given model family.
- The score metric is one of the following: p(correct choice | vocab), p(correct choice | given other choices), accuracy, brier score.
- Then, they plot the distribution of this correlation across samples.
- Ideally, this distribution should be concentrated at values close to 1.0.
- But they find that every time some transformation is applied to the log-probs predicted by the model, the correlation degrades, and the distribution of this correlation shifts away from 1.0.
They find that the main reason for the above degradation in correlation is the probability mass on incorrect choices.
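The protocol described above can be sketched with hypothetical toy data (our illustration, not the paper's actual numbers): for each benchmark sample, correlate a score metric with log-compute across a model family, then inspect the distribution of those per-sample correlations.

```python
# Toy sketch of the per-sample score-compute correlation protocol.
# All data here are synthetic; only the shape of the analysis is real.
import numpy as np

rng = np.random.default_rng(0)
log_compute = np.log10([1e18, 1e19, 1e20, 1e21, 1e22])  # one entry per model

def per_sample_correlations(scores):
    """scores: (n_samples, n_models) matrix of a per-sample score metric."""
    corrs = []
    for s in scores:
        if s.std() == 0:
            continue  # a constant per-sample score carries no trend to correlate
        corrs.append(np.corrcoef(log_compute, s)[0, 1])
    return np.array(corrs)

n = 200
# log-prob of the correct choice: improves smoothly with compute
logprob = -2.0 + 0.4 * np.arange(5) + rng.normal(0, 0.05, (n, 5))
# accuracy: a hard 0/1 transformation of the same trend, much noisier
accuracy = (rng.random((n, 5)) < 0.2 + 0.15 * np.arange(5)).astype(float)

print(np.median(per_sample_correlations(logprob)))   # concentrated near 1.0
print(np.median(per_sample_correlations(accuracy)))  # shifted away from 1.0
```

The point of the sketch is only that each transformation away from raw log-probs (here, thresholding into accuracy) widens the distribution of per-sample correlations.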
Supplementary Material: Yes, I skimmed through their results on score-compute correlation metrics in Appendix G, and benchmark-level score-compute correlation CDFs in Appendix H. Largely, they agree with the results in the main paper.
Relation To Broader Scientific Literature: The paper talks about predictable downstream performance as a function of scaling data, parameters and compute for pre-training. Existing analyses in this space have centered on predicting loss (NLL) on test data, either in data-rich or data-constrained settings. From that perspective, this analysis is novel, but the concern is that it is too narrow (focused only on MCQ), and only presents concerns, as opposed to also presenting practically implementable fixes to make downstream performance more predictable.
Essential References Not Discussed: - "Language Models (Mostly) Know What They Know" (Kadavath et al.) also conducts an analysis of performance and calibration on multiple-choice question benchmarks, albeit without a focus on predictable scaling.
Other Strengths And Weaknesses: Strengths:
- The analyses of downstream prediction capabilities on MCQ benchmarks is quite comprehensive, and it gets at the main cause: the probability mass on incorrect outputs varies in unpredictable ways as we scale compute.
- The story is presented in a clean way, and the paper is easy to read.
Weaknesses:
- The analyses and findings are very heavily focused on MCQ benchmarks. It is unclear how predictability behaves on open-ended QA benchmarks or reasoning problems.
- It mentions that the problem mainly stems from unpredictable movement of probability mass on the incorrect options, but does not discuss any ways in which this can be fixed, or is this a fundamental limitation?
- Intuitively, one might imagine that the probability mass on all incorrect samples should reduce monotonically with scale, so why does it increase on some, and decrease on others? The paper does not explain this. An investigation of this underlying cause seems needed to really understand if the lack of predictability is fundamental, even for MCQ benchmarks.
In general, I feel that the analysis is interesting. It mechanistically and systematically explains why the correlation degrades, but somehow leaves the reader wanting by the end of Section 6, at which point it still remains unclear why the probability on incorrect samples has a high and unpredictable sensitivity to scale. It is also unclear how to transfer the analysis or insights to non-MCQ benchmarks (I did look at the discussion in Appendix B, which mainly defers this to future work). Because of these two points, I am leaning towards a weak reject, but if the authors can add additional discussion on these points, I would be happy to re-consider my score.
Other Comments Or Suggestions: - The plot labels (result plots) are very small and hard to read.
Questions For Authors: - In Figure 6, do the trends change depending on the number of incorrect options present for each question? For example, if there are more incorrect options, is the joint probability distribution less correlated?
- How is the analysis affected by the relationship between correct and incorrect options? For example, if the incorrect answers are close negatives, vs. being very different negatives.
- For True/False questions, is the scaling more predictable?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank you for your thorough and thoughtful review. We greatly appreciate that you found our analysis "interesting," "quite comprehensive," and that it "gets at the main cause" of unpredictability in downstream capabilities. We're particularly pleased that you found our story "presented in a clean way" and that the "paper is easy to read." Thank you also for highlighting Figure 3 as effectively demonstrating the degradation of predictive power on the ARC benchmark.
### On Specificity to Multiple Choice Question Answering (MCQA) Benchmarks
We agree that our focus on MCQA benchmarks represents a scope limitation. We are upfront about this but made this decision for several reasons:
1. MCQA benchmarks remain a widely used evaluation format for frontier models due to their objective scoring and standardization. They're employed by major benchmarking efforts (e.g., MMLU, HELM) and by leading labs to track progress.
2. The MCQA format is particularly valuable for investigating scaling predictability because it allows us to isolate specific transformation steps in a controlled manner. This methodological clarity helped us identify the core mechanism behind unpredictability.
3. We view our manuscript as contributing to the science of scaling-predictable evaluations. While any given task or type of task will have its quirks, our paper shows how one can dive into the mechanisms of apparent unpredictability. For example, new work on studying the predictable scaling behavior of inference-time compute (https://arxiv.org/abs/2502.17578) also applied a per-sample analysis, albeit in a different context. We will add a citation to this work and clarify how analyses like ours can be applied to generative evaluations.
### Fundamental Limitation or Temporary Obstacle
> It mentions that the problem mainly stems from unpredictable movement of probability mass on the incorrect options, but does not discuss any ways in which this can be fixed, or is this a fundamental limitation?
This is certainly not a fundamental limitation and we have subsequent work showing how to overcome this limitation by predicting how probability mass changes on incorrect choices. However, demonstrating and dissecting the problem thoroughly is itself already quite long (as you can tell by our manuscript). We will move our Future Directions section into the main text if accepted.
### Clarification of How Mass Changes on Incorrect Choices
> Intuitively, one might imagine that the probability mass on all incorrect samples should reduce monotonically with scale...
This was our belief initially, but probability mass on incorrect choices almost always increases with compute, at least in the models we were able to study. For example, suppose we have a question about what pet a child has. As models train, (1) they place more mass on syntactically valid words and phrases, so if an incorrect option is "airplane", its mass will still increase relative to a randomly initialized model, even though an airplane isn't a valid pet; (2) they place more mass on semantically plausible alternatives, e.g., "cat" can be a likely pet, even if it isn't the correct option for this question; (3) they learn the nature of MCQA tasks and place more mass on the available options regardless of content.
At some point, probability mass on incorrect choices _must_ decrease, but it seems the models we studied are not close to this tipping point.
### Responses to Specific Questions
> In Figure 6, do the trends change depending on the number of incorrect options present for each question?
> For True/False questions, is the scaling more predictable?
This is an excellent question. Most widely-used MCQA NLP benchmarks have 4 options. MMLU-Pro (https://github.com/TIGER-AI-Lab/MMLU-Pro) offers 4-10 choices per question, but it was publicly released after our data were collected. We expect that scaling is less predictable with more incorrect options and more predictable with fewer options (e.g., True/False). We will add this to our Discussion.
### Visual Presentation Improvements
We appreciate your feedback about plot label sizes. We will increase font sizes in all figures.
### Conclusion
We believe addressing these concerns will substantially strengthen the paper and hope they address your "weak reject" assessment. Thank you again for your constructive feedback, which has genuinely helped us improve our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I am still unsure about the intuition behind why probability mass increase on the incorrect response is the main cause behind the scaling being hard to predict. Since log-loss is a consistent objective, as we minimize pre-training loss for the next-token prediction objective, it should be the case that the most likely option based on the contexts seen in the pre-training data is preferred (over the other MCQ options). If the most likely option is incorrect, then there is a task mismatch between pre-training and fine-tuning, and that seems to be a distribution shift problem. While it is plausible (as shown in the paper) that probability mass increases on the negative samples, if we train long enough, the mass on the most likely (and correct, if the task matches pre-training) sample should still be higher. For example, "airplane" may be a viable option for a "pet" and may gain some mass, but this should eventually be smaller than the mass on "cat" or "dog" (at least relatively).
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer M3MH,
Thank you for your follow-up comment and for engaging deeply with our work.
Your intuition that minimizing pre-training loss should **eventually** favor the most likely (and hopefully correct) continuation is correct in the context of the next-token prediction task over the full vocabulary. Indeed, as our paper shows, the loss and the log-probability of the correct choice when considered over the entire vocabulary, $p_{\theta}^{Vocab}(\text{Correct Choice})$, does correlate strongly and predictably with scale.
However, **this description is only applicable after one scales up enough. Our paper studies: what happens before that point?** That is precisely the realm of scaling studies. In the context of multiple choice question answering (MCQA), performance on these benchmarks depends on the probability of the correct answer and also on the probability on a small, specific set of incorrect distractors provided within the question. Before one scales up enough, probability mass fluctuations on these specific incorrect distractors affect the model family's performance.
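A small hypothetical numeric example (our illustration, with made-up probabilities) of the dependence described above: the vocab-level probability of the correct choice can rise monotonically with scale while the choices-renormalized probability that MCQA accuracy depends on falls, because mass on a specific distractor grows faster.

```python
# Made-up vocab probabilities at three compute scales, for illustration only.
p_correct    = [0.010, 0.020, 0.040]  # p_vocab(correct choice)
p_distractor = [0.005, 0.030, 0.100]  # p_vocab(one plausible incorrect option)

# Renormalize over the available options, as MCQA scoring does.
p_choices = [pc / (pc + pd) for pc, pd in zip(p_correct, p_distractor)]
for pc, pch in zip(p_correct, p_choices):
    print(f"p_vocab(correct)={pc:.3f}  p_choices(correct)={pch:.2f}")
# p_vocab(correct) grows 0.010 -> 0.040, yet p_choices(correct) shrinks
# 0.67 -> 0.29, so the argmax over options flips to the distractor.
```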
Our empirical results across various model families and scales show that this unpredictability arising from incorrect choice probabilities is a tangible issue within current and near-term scaling regimes, especially for smaller models on scaling ladders.
We hope this clarifies why the behavior of probability mass on incorrect choices is central to the challenge of predicting downstream MCQA performance, even given the consistency of the pre-training objective.
Best regards,
The Authors | Summary: The paper studies the relationship between model scale and downstream task performance, to understand why it has been hard to formulate a "scaling law" to describe the relationship unlike known results for pretraining performance.
The paper conducts extensive empirical analyses on many tasks, benchmark datasets, model families and compute scales, to derive a comprehensive picture about their relationship to task performance.
The paper proposes an interesting explanation when tasks are specialized to multiple choice questions (MCQ) and back it with their analyses, that there are transformations going from pretraining loss to downstream MCQ performance that systematically degrade the statistical relationship between the two.
The paper also conjectures that measuring and accounting for the log likelihoods of incorrect answers in MCQs may yield a more stable relationship between model scale and task performance.
Claims And Evidence: There are 3 main claims in the paper.
1. Downstream task performance does not have a predictable relationship with the scale of the model, whereas pretraining loss does. The claim can be scoped better by emphasizing that the downstream tasks are of multiple-choice question type. The evidence for the claim can be strengthened by including a plot showing task performance vs model scale (and contrasting it with the plots in the related literature for pretraining scaling laws).
2. As we go from the pretraining loss to the MCQ task objective, each transformation step weakens the statistical correlations with model scale. The evidence contributed by the paper for this claim is very comprehensive!
3. The variations in probability of incorrect choices do not "cancel out" when averaging across datapoints in a dataset. And modeling those probabilities (at a per sample granularity) is key to better prediction of downstream performance metrics as a function of model scale. The evidence for this claim is indirect at best. More direct lines of evidence may be to control for the probability of incorrect choices, and check whether there is a more predictable relationship between task performance and model scale after controlling for confounding.
Methods And Evaluation Criteria: The evaluation methods are sound.
Theoretical Claims: There are no theoretical claims in the paper. There is a broad claim about how aggregating the incorrect choice probabilities across independent realizations does not lead to a less noisy estimate (unlike in Monte Carlo estimation); this may be good to justify theoretically.
Experimental Designs Or Analyses: The experiment design is sound, and quite an accomplishment in how thorough the empirical results are reported.
Supplementary Material: I reviewed Appendices A-F (i.e. not the additional Figures 7 - 79). The related work appendix was especially helpful to situate the paper in the right context.
Relation To Broader Scientific Literature: The related work section in Appendix is good; I wondered if some abridged version of it could be moved to the main paper (perhaps streamlining Sections 1 and 2 in the process).
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: N.A.
Other Comments Or Suggestions: Fig 5 center: The comment about "0 < P_choices(correct choice) < 0.5 contains little information about accuracy" is a little inscrutable. The plot shows not much dispersion at all for "Fraction of Accuracy=1", unlike Fig 5 left. I can see two explanations: one is that to understand the claim we need to look at the composition of Fig 5 left and Fig 5 center, and realize that a large dispersion in Fig 5 left has large downstream effect. The other explanation is that Accuracy is a binary variable (when measured per datum); there are only 2 values for accuracy and each accuracy value maps to a large spread of P_choices(correct choice) values.
Questions For Authors: N.A.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | null | null | null | null | null | null | null | null | |
QuEST: Stable Training of LLMs with 1-Bit Weights and Activations | Accept (poster) | Summary: The paper presents QuEST, a quantization-aware training (QAT) method that enables training LLMs with extremely low-precision weights and activations (both down to 1-bit). The key contributions are: (1) Hadamard normalization with MSE-optimal fitting for quantization, and (2) a trust gradient estimator that minimizes the difference between quantized and full-precision gradients. The method claims to achieve Pareto-competitive performance with FP16 while using 4-bit or lower precision.
## update after rebuttal
After reviewing the authors' rebuttal, I maintain my weak reject recommendation. While the authors made efforts to address several concerns, limitations remain:
1. Scale and Evaluation:
- The authors' testing remains limited to relatively small models (up to 800M parameters, with a mentioned 1.2B test). While they claim to be working on 3B, this still falls short of demonstrating effectiveness at production-relevant scales (7B+).
- The authors did not address my concern about performance on reasoning tasks like GSM8k or AIME, only showing basic zero-shot evaluations on simpler tasks like HellaSwag.
2. QAT vs PTQ Comparison:
- The authors' comparison with PTQ methods at small scales (800M) is methodologically questionable, as PTQ methods typically show better performance with increased model scale due to greater parameter redundancy.
- While they provided some results from QuaRot at 70B scale, this comparison doesn't fully address the fundamental differences between QAT and PTQ at scale.
- Most concerning is the computational cost: modern PTQ methods can quantize 70B models in about an hour on a single A100, while the authors quote 1600 A100 hours just to train an 800M model with QuEST, and estimate 40,000 hours for a 3B model. This huge difference in resource requirements makes QuEST far less competitive than a PTQ+SFT flow in practical scenarios.
3. Technical Novelty:
- The use of Hadamard Transform (HT) for improving gradient flow isn't as novel as presented - though not directly equivalent, similar techniques appeared in FAT from 4 years ago.
- QuaRot, a well-known PTQ scheme, also uses the Hadamard transform on both weights and activations. Thus I don't see the Hadamard transform as a fundamental novelty in QuEST.
4. Broader Impact:
- Without demonstrating effectiveness on larger models and more challenging tasks (especially reasoning tasks), the practical impact of this work remains uncertain.
- The authors' rebuttal suggests that reasoning tasks are "outside the reach of 1B-parameter models without explicit instruction fine-tuning," but this sidesteps the important question of how their method performs after fine-tuning.
While the paper presents some interesting technical contributions, these fundamental limitations in scale, evaluation breadth, novelty, and especially the prohibitive computational requirements make it difficult to recommend for acceptance at ICML. The work would benefit from more comprehensive large-scale evaluations, clearer differentiation from existing approaches, and better justification for its high computational costs versus existing PTQ solutions.
## Additional post-rebuttal comments
After further examining the materials, I would like to bring up two further issues, which I believe will become visible to the authors, for improving their work:
### 1. Mischaracterization of FAT vs. QuEST
The authors claim their use of Hadamard Transform (HT) is new & fundamentally different from FAT, but this is not accurate. While there are implementation differences, both methods transform weight representations through frequency domains to improve quantization:
- In FAT (Fig. 3 and Supplementary Section 2.2), gradient flow explicitly passes through the Fourier domain during backpropagation with ∂Wt/∂W being a function of frequency components
- Similarly, QuEST employs the Hadamard domain for gradient flow, and their "trust estimator" also operates in this transformed space
- Both approaches use masking technique and both aim to achieve the same goal: improving gradient estimation by leveraging frequency-based transformations
The core innovation of "transforming to a more quantization-friendly domain" is shared between these approaches, though applied in different contexts.
### 2. Critical limitations in practical applicability
The authors are encouraged to check out the recent "Quantization Hurts Reasoning" paper (https://arxiv.org/abs/2504.04823) which demonstrates why evaluating reasoning capability at low bitwidth is crucial for modern LLMs. This paper shows:
- Severe degradation in complex reasoning tasks (e.g., AIME) when quantizing below 8 bits
- Larger models (32B-70B) fare significantly better than smaller ones under aggressive quantization
- Models' origins (distilled vs. RL-trained) substantially impact quantization tolerance
QuEST's failure to demonstrate effectiveness beyond toy-scale models (800M-1.2B) leaves its practicality questionable for the most important use case: enabling efficient inference of large reasoning models (7B+). The computational cost of QuEST training (40,000 GPU hours estimated for just a 3B model) presents a prohibitive barrier to practical adoption.
This makes QuEST's contribution largely theoretical rather than practical, especially when PTQ methods can quantize 70B models in hours (my experience is <1hr) with acceptable reasoning performance degradation.
Claims And Evidence: The claims are partially supported by evidence, but with several limitations:
- The experiments only cover relatively small models (30M-800M parameters). The authors didn't provide justification of not going to higher sizes.
- Testing is limited to C4 dataset only, making all conclusions restrictive, e.g., lacking reasoning tasks like GSM8k or AIME performance
- Improvements over baselines (PACT, LSQ) are modest (e.g., ~2.6 point improvement in W1A1 configuration)
- The paper claims GPU speedups for 7B models but only projects them without actual training
Methods And Evaluation Criteria: The methods are sound but narrow in scope:
- Use of C4 dataset alone is limiting
- Model sizes tested (up to 800M) are not aligned with current community standards
- Baseline comparisons (PACT, LSQ from 2018-2020) are quite outdated
- Primary metrics (perplexity, validation loss) need broader validation
Theoretical Claims: The theoretical framework appears sound, but several clarifications are needed:
- The derivation of trust estimation is mathematically correct
- The paper should clarify if α in equation (1) is per-output-channel scaling
- The relationship between HT and gradient estimation could be better explained
Experimental Designs Or Analyses: Experimental results appear valid. Can you explain why in Fig. 1 the W3A3 and W4A4 curves overlap?
Supplementary Material: Yes, codes.
Relation To Broader Scientific Literature: Several critical omissions in literature discussion:
- Should discuss relationship to recent work on precision scaling laws (Kumar et al., 2024)
- Should compare against recent work on numerical precision effects (Feng et al., 2024)
- Missing comparison with state-of-the-art PTQ work like ARB-LLM
- The meaning of "QuEST" is never explained
See below for more details.
Essential References Not Discussed: Several critical missing references and comparisons:
1. Recent theoretical works on precision:
- "Scaling Laws for Precision" (https://arxiv.org/abs/2411.04330) - Shows fundamental relationships between model precision and performance
- "How Numerical Precision Affects Mathematical Reasoning Capabilities of LLMs" (https://arxiv.org/abs/2410.13857) - Demonstrates theoretical limitations of low-precision training
2. Recent PTQ advances:
- "ARB-LLM: Alternating Refined Binarizations for Large Language Models" (https://github.com/zhitengli/arb-llm) - Achieves strong results with binary weights
- "QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs" (https://arxiv.org/abs/2404.00456) - Successfully applies Hadamard transform in PTQ setting
The omission of these works is particularly concerning because:
- The use of Hadamard transform (HT) in QuEST is presented as novel, but QuaRot has already demonstrated its effectiveness in the more challenging PTQ setting, achieving strong results on much larger models (LLaMA2-70B). Their success in PTQ makes HT's effectiveness in QAT unsurprising.
- Recent developments in HT optimization (as shown in PyTorch's HadaCore) demonstrate that the technique is becoming standard in quantization workflows. The paper should acknowledge this broader context.
- The theoretical framework from Kumar et al. and Feng et al. provides important context about fundamental precision-performance trade-offs that should inform any QAT method.
Other Strengths And Weaknesses: Strengths:
- Novel integration of Hadamard Transform in QAT
- Stable training achieved at extreme low precision (1-bit)
- Practical GPU implementation with demonstrated speedups
- Clear theoretical framework for trust estimation
Weaknesses:
- Limited model scale (only up to 800M parameters)
- Narrow experimental validation (single dataset)
- Marginal improvements over baselines
- Some figures need better explanation (e.g., overlapping W3A3 and W4A4 curves in Figure 1)
Other Comments Or Suggestions: - Editorial issues (repeated definitions of QAT, STE, etc., typos like "Gaussiant")
- While the paper presents interesting ideas and achieves stable low-precision training, the limited scope of evaluation, modest improvements, and several important missing comparisons suggest the work needs substantial revision.
Questions For Authors: QUESTIONS FOR AUTHORS:
- Why is the method named QuEST?
- Can you explain the overlapping W3A3 and W4A4 curves in Figure 1?
- What challenges do you anticipate in scaling to larger models (7B+)?
- How does your method compare with recent works like ARB-LLM and QuaRot?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > Why is the method named QuEST?
We apologize for the omission, QuEST stands for **Qu**antized, **E**fficient, and **S**table **T**raining.
> Can you explain the overlapping W3A3 and W4A4 curves in Figure 1?
Decreasing the training precision shifts fixed-parameter-count models to the left, while the performance degradation from increased compression raises the loss and shifts the points upward. Between W3A3 and W4A4, these effects cancel out, landing the points on seemingly "the same" loss-to-model-size curve. In Section 4.4, we presented the "precision efficiency" metric eff(P) to quantify this trade-off and accurately determine the optimal precision to be W4A4.
> What challenges do you anticipate in scaling to larger models (7B+)?
We did not observe any training instabilities or unexpected performance degradation when scaling the model size. As such, the main challenge we anticipate is the need to commit hundreds of thousands of GPU-hours to test this novel approach for a production-scale pre-training run.
> Comparison with recent works like ARB-LLM, QuaRot, and Kumar et al. (2024).
We did not include a comparison to PTQ methods (QuaRot or ARB-LLM) since they are not competitive with QAT: many PTQ methods work in one-shot, whereas QAT methods perform training or re-training by definition.
To fully address this concern, below we present a comparison of C4 validation PPL of the 800M models trained by us using BF16, QuEST INT4, and QuEST INT8. We then show numbers obtained relative to QuaRot PTQ (as suggested by the reviewer), and round-to-nearest quantization (RTN):
| BF16 | QuEST W4A4 | RTN W4A4 | QuaRot W4A4 | RTN W8A8 | QuaRot W8A8 |
|------|------------|-----------|-------------|-----------|------------|
| 11.72 | 12.12 | 53.73 | 46.85 | 12.60 | 12.59 |
These results clearly show that PTQ isn’t competitive with QAT. Specifically, notice that our W4A4 model is _more accurate_ than the QuaRot model in W8A8.
To your question, ARB-LLM focuses on the simpler problem of weight-only quantization which is not the main focus of our work, whereas Feng et al. (2024) focuses on PTQ. As such, we largely omitted these comparisons from the paper.
We will provide additional background citations and clarification on this topic in the next version of the paper.
Please see the answer to [Reviewer HH36](https://openreview.net/forum?id=I0Ux2nAN6u&noteId=h10Gvw440m) for the relationship with Kumar et al. (2024).
> Experiments only cover small models (30M-800M parameters).
Notably, the largest models we trained (800M) closely match the size of the _Meta Llama-3.2-1B_ model (up to a smaller embedding layer). This puts us at the lower end of current standards in terms of model sizes. Importantly, our study considers slightly larger models than the concurrent work of Kumar et al. (2024).
Runs on the 800M model require around \~1600 A100 GPU hours, with the total experimental cost of our submission being around 12000 GPU hours. This makes it hard to scale further in an academic setup. Nevertheless, we have confirmed our results at 1.2B-parameter scale in the answer to [Reviewer HH36](https://openreview.net/forum?id=I0Ux2nAN6u&noteId=h10Gvw440m), and are working towards a 3B-parameter run on 300B tokens (\~40,000 A100 GPU hours).
We hope the reviewer can appreciate that this is very computationally-intensive and does not fit within the scope of the rebuttal.
> Testing is limited to C4 dataset only, making all conclusions restrictive, e.g., lacking reasoning tasks like GSM8k or AIME performance.
Please note that our testing is not limited to C4: Appendix A.3 included the 0-shot evaluations of some of the models. To better address this issue, we present additional 0-shot evaluations (HellaSwag, ARC-Challenge, ARC-Easy, PiQA, Winogrande) of a broader set of models we trained:
https://github.com/QuEST2025/speedup/blob/main/zero-shots.md
These results are again consistent with the C4 evaluations. As for the reasoning tasks like GSM8k or AIME, they are outside the reach of 1B-parameter models without an explicit instruction fine-tuning phase.
> The success of the Hadamard transform in PTQ makes HT's effectiveness in QAT unsurprising.
**Regarding the Hadamard Transform (HT):** We emphasize that the context in which we analyze HT is different from all the mentioned PTQ methods. Specifically, PTQ methods utilize the Hadamard transform to mitigate outliers in model weights and activations, and obtain better quantization grid fit.
In addition to this, we show novel effects of HT on:
1. Improving gradient estimator alignment, as discussed in Section 3.2 (line 205)
2. Circumventing the “dead weight problem” in gradient masking, as discussed in Section 3.3 (line 215) and Appendix A.1, by making sure that all weights get gradient.
These effects are unique to QAT; to the best of our knowledge, we are the first to explore them.
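The outlier-mitigation role of the Hadamard transform mentioned above can be illustrated with a minimal sketch (our construction, not the paper's implementation): spreading an outlier-heavy vector into a near-Gaussian one lets a uniform low-bit grid fit with far lower MSE, and the orthonormal transform preserves that MSE when mapped back.

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform; self-inverse, len(x) = 2^k."""
    x = x.copy()
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x / np.sqrt(n)

def quantize_rtn(x, bits=4):
    """Symmetric round-to-nearest on a uniform grid scaled to max |x|."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / levels
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 256)
x[0] = 50.0  # one massive activation outlier, as observed in LLMs

err_plain = np.mean((quantize_rtn(x) - x) ** 2)
# Quantize in the Hadamard domain, then map back with the (self-inverse) transform.
err_hadamard = np.mean((fwht(quantize_rtn(fwht(x))) - x) ** 2)
print(err_plain, err_hadamard)  # Hadamard-domain quantization has far lower MSE
```

This only demonstrates the grid-fit effect that PTQ methods also exploit; the gradient-alignment and dead-weight effects discussed above are specific to the QAT backward pass and are not captured by this sketch.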
---
Rebuttal Comment 1.1:
Comment: While I appreciate the authors' response, most concerns are not quite addressed. For one thing, comparing QAT and PTQ in the small-model-size regime is unfair, as PTQ performs better and better as model size scales up, with more parameter redundancy and higher tolerance to quantization. Second, using a frequency-domain transform (HT practically does this) for better gradient flow isn't new; e.g., "FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation" from 4 years back was already doing this, though not in the context of LLMs.
---
Reply to Comment 1.1.1:
Comment: Thank you for the opportunity to address your remaining concerns.
> 1. Comparing QAT & PTQ in the small model size regime is unfair, as PTQ performs better & better as model size scales up.
We apologize for the misunderstanding: we interpreted your first request as asking us to directly compare PTQ with QAT in our setting, since the Kumar et al. reference you pointed to does in fact do this: they apply PTQ to models from 30M to 220M parameters in their Figure 1.
We do agree that this comparison is not very meaningful on small models.
To address the substance of your question, we examine the performance of state-of-the-art weights and activations (W&A) PTQ methods **at scale**, and compare it with QuEST:
- The state-of-the-art PTQ methods are QuaRot (NeurIPS24), and SpinQuant (to appear in ICLR25). Focusing on SpinQuant, we observe that 4-bit W&A quantization is still far from lossless, even at 70B scale. Please see Table 5 at https://arxiv.org/pdf/2405.16406, showing a significant 4.4 avg. 0-shot drop for Llama-3 70B. We would expect 2-bits or below to provide terrible recovery with PTQ. We believe that the key compression difficulty addressed by QAT is the quantization of model activations, containing massive outliers in LLMs.
- Broadly, the results of Kumar et al., reproduced by https://arxiv.org/pdf/2411.17691, suggest that PTQ methods become _worse_ as the training tokens increase. QAT is not affected by this; in fact, our scaling laws suggest that QAT becomes better as toks/params increase (see Appendix C.2).
We hope this addresses your concern. We include further relevant comparisons on point 3 below.
> 2. Second, using a frequency-domain transform for better gradient flow isn't new; e.g., FAT is already doing this.
We thank you for raising this interesting reference. Having examined it thoroughly, we respectfully point out that there are major differences between our results, what the authors propose, and your characterization of their results:
1. First, the FAT approach is different from ours: please see their Figure 3 and Sec 3.3. What they do is **a) transform the weights via DCT, b) perform a parametrized filtering over the weights, c) transform back the filtered weights, and then finally d) quantize the weights in the standard domain**. At inference time, the filtering and transform components are dropped from the model. Moreover, they do not perform any kind of filtering over activations, as this would be prohibitive at runtime.
2. By contrast, in QuEST: we a) transform **both weights and activations** into Hadamard domain; b) we perform distribution-matching, clipping, and quantization **for both weights and activations, in the Hadamard domain**; c) **the gradient flow “switches” between domains**.
We hope it is clear that the two approaches are different: FAT is clearly developed with CNN filters in mind, and it is not obvious to us how it would be applied to LLMs. **Irrespective of this, we hope the reviewer can agree that FAT does not address activation quantization, critical in LLMs, at all: they simply do RTN on activations.**
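To make the contrast concrete, here is a minimal numpy sketch of the forward-pass structure described in (a)–(b) above: an orthonormal Hadamard rotation followed by round-to-nearest quantization in the rotated domain. The fitted scales, clipping, and trust components are omitted, so this is an illustration of the structure only, not QuEST's actual implementation:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an orthonormal Hadamard matrix;
    # n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def quantize_rtn(x, bits=4):
    # symmetric round-to-nearest with a simple absmax scale
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / levels
    return np.round(x / scale) * scale

def quest_forward(x, bits=4):
    # (a) rotate into the Hadamard domain, (b) quantize there,
    # (c) rotate back; weights would be treated the same way
    H = hadamard(x.shape[-1])
    xh = x @ H.T
    return quantize_rtn(xh, bits) @ H
```

Because the rotation is orthonormal, the quantization error introduced in the Hadamard domain is not amplified when rotating back.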
3. Finally, we try to address a key concern from the original review:
> modest improvements, and missing comparisons
We performed a comparison between QuEST and:
* **STE** (BNN, Courbariaux et al.)
* Hadamard + STE, i.e. **QuaRot** with a backward pass (but without our fitting and trust factors)
* Activation-only quantization via STE (**AO STE**), which is an extremely generous upper bound on the performance of _FAT_: we have zero error _on the weights_, and apply STE on the activations, as they do.
The results are provided below, for 30M and 50M models.
| Model size | Method | W4A4 | W3A3 | W2A2 | W1A1 |
|-|-|-|-|-|-|
| 30M | STE | 3.792 | 4.449 | 4.793 | 5.256 |
| 30M | AO STE | 3.658 | 4.181 | 4.549 | 5.004 |
| 30M | QuaRot | 3.338 | 3.612 | 4.481 | 4.932 |
| 30M | QuEST | 3.272 | 3.372 | 3.574 | 3.945 |
| 50M | STE | 4.040 | 4.542 | 5.162 | 6.867 |
| 50M | AO STE | 3.733 | 4.315 | 4.601 | 4.985 |
| 50M | QuaRot | 3.201 | 3.695 | 4.566 | 5.007 |
| 50M | QuEST | 3.135 | 3.226 | 3.441 | 3.791 |
C4 Val Loss, D/N=100
To more precisely gauge **the scale of improvements**, we introduce an **iso-loss size improvement** metric: a better QAT method needs a smaller model to reach the same loss. For each QAT method X, we use its scaling law to compute the size of the model we would need to train **using QuEST** in order to match the loss achieved by X.
| Bitwidth | Eff(P): QuEST | Eff(P): LSQ | Eff(P): QuaRot | Eff(P): AO STE | QuEST size reduction over LSQ | over QuaRot | over AO STE |
|-|-|-|-|-|-|-|-|
| W4A4 | 0.69 | 0.56 | 0.48 | 0.09 | 19% | 31% | 87% |
| W3A3 | 0.43 | 0.32 | 0.11 | 0.01 | 25% | 74% | 98% |
| W2A2 | 0.15 | 0.12 | 0.00 | 0.00 | 21% | 98% | 99% |
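The iso-loss computation amounts to inverting a fitted power-law; a sketch, where the constants `a`, `b`, `c` are hypothetical placeholders rather than our actual fitted values:

```python
def iso_loss_size(loss, a, b, c):
    # invert a fitted scaling law  L(N) = a * N**(-b) + c  for the
    # model size N that reaches a given loss under that law
    return (a / (loss - c)) ** (1.0 / b)

def quest_size_reduction(n_x, fit_x, fit_quest):
    # loss that method X attains at size n_x, then the (smaller)
    # QuEST size matching it, reported as fraction of params saved
    a, b, c = fit_x
    loss_x = a * n_x ** (-b) + c
    return 1.0 - iso_loss_size(loss_x, *fit_quest) / n_x
```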
We believe this makes it evident that 1) QuEST is significantly superior to prior methods, and 2) for QuaRot and STE, this advantage increases as we decrease target precision. | Summary: In this paper, the authors propose QuEST, a low-bit quantization-aware training (QAT) method aimed at compressing models more accurately. Experiments demonstrate that QuEST outperforms the LSQ method in performance under various low-bit weight and activation quantization scenarios. Additionally, based on the designed INT4 kernel, the Linear computation per layer shows a better speedup ratio compared to BF16.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Only partially; the experimental design is not entirely convincing.
In the speedup ratio experiments, the paper only compares QuEST with BF16, which is insufficient to comprehensively evaluate the effectiveness of the method. Additionally, tests should be conducted with different sequence lengths and batch sizes to observe the performance variations of QuEST under different scenarios. The current experimental analysis remains incomplete, and more comparative experiments are needed to support its efficiency claims.
Supplementary Material: Yes. All.
Relation To Broader Scientific Literature: None
Essential References Not Discussed: None
Other Strengths And Weaknesses: 1. In Section 4.3, Scaling Laws are mentioned, and related experiments are conducted in Section 4.6. The stable Scaling Laws indicate that the QuEST method exhibits strong generalization capabilities across models of different scales, meaning its performance improvement patterns remain consistent across LLMs of varying sizes. This experiment may suggest that QuEST can still work effectively on larger models, but the paper does not provide a detailed discussion on how its advantages manifest in such scenarios.
2. In Section 5, Kernel Overview, the paper mentions that the third stage utilizes an enhanced CUTLASS and optimizes GEMM operations, but it does not elaborate on the specific implementation. From an optimization perspective, this may involve the following aspects:
GEMM kernel optimization: Potentially using more efficient INT4 computation layouts to improve Tensor Core utilization.
Memory access optimization: Possibly reducing data movement and optimizing shared memory loading to enhance throughput.
Parallel computation optimization: Potentially improving warp-level scheduling or data flow optimization to boost computational efficiency. However, the paper does not provide detailed explanations of these optimizations, which need further clarification.
3. As noted under Experimental Designs Or Analyses above, the speedup experiments compare QuEST only with BF16, and should additionally cover different sequence lengths and batch sizes.
Other Comments Or Suggestions: None
Questions For Authors: My questions repeat the three points under Other Strengths And Weaknesses above: (1) how QuEST's advantages manifest at larger model scales, given the stable scaling laws; (2) the specific CUTLASS/GEMM, memory-access, and warp-scheduling optimizations in the kernel; and (3) speedup comparisons beyond the BF16 baseline, across different sequence lengths and batch sizes.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > Detailed discussion on how its (QuEST’s) advantages manifest.
The fact that QuEST enables stable training across scales has the following implications:
1. If using QuEST, INT4 is the “optimal” bit-width for weights and activations in terms of inference effectiveness, that is, the accuracy that can be obtained at a given model size. This is a strict improvement relative to STE, which was found in Kumar et al. (2024) and in our experiments to only provide competitive low-bit training at 7-8 bit weights and activations.
2. As the reviewer remarked, this finding holds and transfers across all model scales: thus, future large-scale pre-training runs could leverage this technique to produce highly-accurate models with low precision.
3. Our approach also enables a direct comparison between the “effectiveness” of different precisions P, namely the efficiency factor eff(P). Thus, based on runs on small models, the user can determine the “optimal” precision for a given model architecture and hardware target (which may influence the set of precisions supported).
To validate our findings at a larger scale, we trained a 1.2B-parameter model over 40B tokens (2’000 A100 GPU hours), which was the largest size we could train on our academic cluster. The accuracy findings, including C4 loss and 0-shot evaluations, confirm that our findings indeed scale to this larger model size: https://github.com/QuEST2025/speedup/blob/main/1200M.md
We are currently working towards a 3B-parameter run on 300B tokens (~40’000 GPU hours), but we hope the reviewer can appreciate that this is very computationally-intensive and does not fit within the scope of the rebuttal.
> Detailed explanations of [GPU Kernel] optimizations.
We believe there may be a slight confusion here. As stated in the paper, we utilize the highly optimized CUTLASS operations for the “raw” matrix multiplications in both 16-bit (BF16) and 4-bit (INT4) precisions. Thus, these basic operations are heavily optimized by NVIDIA, the makers of the library and the hardware. Our main focus is to reduce the inference overheads over the QuEST format: that is, quantization/dequantization, clipping and Hadamard multiplication. This is overviewed in Section 5, and we would be happy to describe it further in the discussion.
> The paper only compares QuEST with BF16, which is insufficient to comprehensively evaluate the effectiveness of the method.
Our speedup ratio experiments compare: the BF16 baseline, our approach (QuEST), and an idealized 4bit version which does not require the Hadamard multiplication (No HT).
We use BF16 data type in the baseline because
1. Our experiments use smaller-scale Llama-3 models. The full-precision data type for Llama-3 models is BF16.
1. BF16 and FP16 are common weight types in the popular open-source models (e.g., Llama, Qwen, etc.). BF16 and FP16 are equally supported by modern GPUs (e.g., Ampere and Hopper architectures). They have the same computational performance. The results for BF16 also apply to FP16.
Our BF16 baseline is the official NVIDIA libraries (i.e., cuBLAS/cuDNN) that implement the GEMM routine used under-the-hood by e.g. PyTorch. These codes have been heavily optimized by NVIDIA to achieve near-optimal performance on their hardware.
The results show that:
1. Our approach leads to significant inference speedups relative to the extremely competitive full-precision baseline, which is standard for all open models, especially on large matrices. Note that this occurs at the same or better accuracy (as per our results).
1. The overheads of our format, including activation clipping and Hadamard transforms, are small (less than 10% on average, and 30% in the worst case).
1. Our results are not surprising, since they track well with the expected speedups from low-precision GEMM for the corresponding matrix sizes.
We would be happy to run additional comparisons that the reviewer would find relevant in this context.
> Tests should be conducted with different sequence lengths and batch sizes.
We agree; to address this, we conducted more tests on different sequence lengths and batch sizes. You can find the speedup results using this link https://github.com/QuEST2025/speedup/blob/main/tables.pdf
In addition, we conducted further experiments on the scalability of our CUDA kernels in isolation, by increasing the arithmetic intensity of the GEMM problems. The following table shows the speedup achieved with a fixed N = 4096 while varying the M (batch times sequence length) and K (hidden) dimensions.
| m \ k | 2048 | 4096 | 8192 | 12288 | 16384 |
|--------|-------|-------|-------|-------|-------|
| 1024 | 1.84x | 2.87x | 4.31x | 5.00x | 5.55x |
| 2048 | 1.94x | 3.23x | 4.83x | 5.58x | 6.07x |
| 4096 | 2.17x | 3.78x | 5.44x | 5.86x | 6.27x |
| 8192 | 2.24x | 4.04x | 5.86x | 6.22x | 6.28x |
We hope this clarifies your concerns, and are happy to continue the discussion if the reviewer finds it necessary. | Summary: This paper explores how to improve quantization aware training. Following recent work in post-training quantization, this paper proposes a combination of techniques it calls QuEST. QuEST involves using a Hadamard Transform in the forward pass to improve the quantization process and introduces "trust estimation" to improve gradient estimation in the backward pass. The evaluation shows QuEST improves the tradeoff between model performance at a given precision. A new precision scaling law is fit to these results. The evaluation also shows that at 4-bits the resulting model is faster in evaluating layers than when using BF16 on RTX 4090 hardware.
Claims And Evidence: The claims of improved QAT results seem to rely mainly on a comparison to results from a five year old paper, LSQ, (Esser et al., 2019) as shown in Figure 3. This does not seem to me to be sufficient evaluation to establish QuEST as a new SOTA for QAT.
The claim of improved efficiency seems somewhat orthogonal to the proposed approach (even if supported by empirical evaluation). Using lower precision can be more efficient was, I thought, well known. That said, it was unclear to me how well optimized the baseline in Figure 6 is (i.e., is it using cuDNN or the software flow described in Section 5, but on FP16?)
Methods And Evaluation Criteria: The benchmarks and backbone architecture employed make sense.
Theoretical Claims: The paper does not make specific theoretical claims.
Experimental Designs Or Analyses: One objection to the paper is lack of quantitative comparison with more recent QAT training proposals.
Supplementary Material: I briefly reviewed the supplemental material in Appendix A.
I appreciate the authors have included their code as .zip file. I briefly looked at some of the CUDA (.cu) files to get a sense of what they contained. It would be helpful if either the README.md in the .zip were expanded to document or an appendix were included in the PDF to document the code structure in more detail (the only guide to understanding the code appears to be one column of text in Section 5). I was not able to see for example, how the 'trust factor' is implemented in the code.
Relation To Broader Scientific Literature: The main questions in QAT are how to quantize and how to estimate and apply the gradient. The paper is attempting to innovate on both fronts. The use of Hadamard Transform to improve quantization proposed in the paper seems very similar to that in recent closely related works on post-training quantization (e.g., QuIP#). The "trust estimation" approach may be somewhat more novel.
Essential References Not Discussed: I didn't see AdaBin https://arxiv.org/abs/2208.08084 (ECCV 2022) cited and I thought that work was also doing quantization aware training.
I also was thinking some of the earlier works on binary neural networks ought to have been cited and discussed. E.g.,
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Training deep neural networks with low precision multiplications. arXiv, 2014.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NIPS, 2015.
Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv, 2016.
Other Strengths And Weaknesses: Section 5 provides fairly high level details of the GPU implementation. Adding more documentation to the .zip or an appendix would make it easier to check how the provided code implements the high level algorithms presented in the paper.
Additional ablation studies would help the paper. There is some data in Figure 5(c) showing the impact of the HT on QuEST, but this type of evaluation should be expanded. More insight into how the 'trust factor' impacts the results could be provided through ablation studies.
Other Comments Or Suggestions: Some of the writing could be improved. For example:
- In the abstract, what is meant by efficient? how efficient?
- Page 1: It seems incorrect to refer to "Pareto-optimal frontier" as a metric. What does it mean for 8-bit precision to be pareto-optimal for QAT? The word Pareto does not seem to appear in Kumar et al. (2024) and I'm not sure what the authors of the present articles mean.
- The notation in Equation 1 seems non-standard. Usually the argument to the round-to-nearest operator is not restricted to [-1, 1]. Also, the quantization precision is typically made explicit, in terms of the number of quantization levels or the number of bits used, but the formulation in Equation 1 seems to show neither.
- Perhaps I'm missing something but to me it seems the $\gamma$ in the sentence on lines 140-142 is irrelevant to the relationship in that sentence, which in any case would appear trivially true regardless of the smoothness of the loss function based only on the definition of $S_{small}$ on line 130.
*** POST REBUTTAL COMMENTS ***
Thank you for the rebuttal response. Unfortunately, I was unable to get to this before the too short timeline set by the conference organizers and it seems like I can no longer post a response below. So, I'll just update the text of my review with my thoughts after your response.
Thanks for clarifying LSQ as SOTA and the particular taxonomic differences (reading through the Kumar et al. 2024 reference was also helpful in this regard).
I am still wondering where the 'trust factor' is implemented in your code. If you are still able to respond (which I guess unfortunately you cannot), please point us to the files/lines.
Regarding the "inconsistent SGD iterations" references cited, as I understood them they show a specific approximation (e.g., sparsity with certain properties) converges, but the relationship of what the process converges to with the original (local) optimum wasn't clear to me. For example, the description of the example in Figure 2 in Lin et al. (https://arxiv.org/pdf/2006.07253) seems to make the case that gradients for the dense network point in a wrong direction when viewed from the sparse network. So, 'trusting' those gradients might be misleading.
Kumar et al. (2024), in the middle ("Empirical") plot of Figure 6 in Section 4.3.2, show 6-bit doing better than 8-bit for "Predicted" on the leftmost plot for floating point. I see the leftmost plot is for integer, but the caption says "predicted" (I assume from the scaling-law fit), making it unclear to me whether the 8-bit optimum would hold up in practice (i.e., "empirically").
However, empirically, the results in your submission do seem to show an improvement to SOTA and going through the submission another time, I think I now get the intuition for why QuEST works, so I'm leaning towards raising my score.
Questions For Authors: I'm not convinced that the 'trust factor' approach to masking out gradients is well motivated. The data in Figure 2 shows that this masking obviously makes the resulting direction closer to a full precision gradient, but this seems to counter the goal of QAT in that the idea is to perform gradient decent accounting for the noise introduced by quantization (so the optimizations paths should differ). If you can clarify your motivation for the trust factor, and better back that up quantitatively or theoretically, that may help the paper. Please comment.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the detailed review! We address all your questions below.
> I'm not convinced that the 'trust factor' approach to masking out gradients is well motivated.
Thank you for raising this. First, trust factor masking is motivated theoretically by directly targeting the source of error in the QAT iteration, relative to standard SGD.
Specifically, the “ideal” SGD iteration is $ x_{t+1} = x_t - \nabla L( x_t) $. Instead, in QAT we execute $ x_{t+1} = x_t - \nabla L( Q(x_t) ), $ where $Q(x_t)$ is quantization. The “error” between these iterations is precisely the $ \| \nabla L( Q(x_t) ) - \nabla L( x_t ) \|$ term we seek to minimize.
The theory of inconsistent SGD iterations shows that this “error” directly impacts SGD convergence: see the work of Nadiradze et al. (https://arxiv.org/pdf/2001.05918), who bounded this error by applying smoothness, and Lin et al. (https://arxiv.org/pdf/2006.07253) who investigate the same for sparse projections. In this context, our work investigates a fast heuristic for minimizing this error in the context of quantization.
Second, our practical results confirm that trust factors are key to good practical convergence.
Besides Figure 2, we also illustrated this in Appendix Figure 10, which showed that properly tuned trust factors lead to much better loss than both clipping (s = 1) and STE (large s).
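As an illustration only (the gating criterion below is a hypothetical stand-in, not necessarily the paper's exact trust rule), trust-factor masking can be sketched as zeroing the straight-through gradient wherever the local quantization error exceeds a trust threshold:

```python
import numpy as np

def quantize_rtn(x, bits=4):
    # symmetric round-to-nearest with an absmax scale
    levels = 2 ** (bits - 1) - 1
    delta = np.max(np.abs(x)) / levels   # quantization step
    return np.round(x / delta) * delta, delta

def trust_masked_grad(grad_at_q, x, s=1.0, bits=4):
    # STE would pass grad_at_q unchanged; here the gradient is kept
    # only where |x - Q(x)| is small, i.e. where grad L(Q(x)) can be
    # "trusted" to approximate grad L(x). Since RTN error is at most
    # delta/2, any s >= 1 recovers plain STE, while smaller s passes
    # gradients only near grid points.
    xq, delta = quantize_rtn(x, bits)
    mask = np.abs(x - xq) <= s * delta / 2.0
    return grad_at_q * mask
```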
> There does not seem to be sufficient evaluation to establish QuEST as a new SOTA for QAT.
Note that, generally, QAT methods usually fall into two categories:
1. General schemes that can be applied to any bitwidth (such as STE, PACT, LSQ, and QuEST).
2. Schemes specialized to some bit-widths, e.g. binarization, such as AdaBin.
For general schemes, LSQ was SOTA: e.g., recent work from IBM & MIT (https://arxiv.org/abs/2404.03605) on outlier suppression still uses a variant of LSQ, whereas the recent work of Kumar et al. only uses STE.
Since our work is focused on general scaling laws, we did not consider specific binarization schemes, as they do not directly “port” across bit-widths.
To fully address your concern, we have ported AdaBin to our setting and compared it with QuEST at 30-100M scale.
Remarkably, we found that QuEST consistently outperforms AdaBin in the W1A1 setting (for which AdaBin is specifically designed). Moreover, the stripped-down QuEST *without the Hadamard* recovers the performance of AdaBin:
https://github.com/QuEST2025/speedup/blob/main/AdaBin.md
We hope this clarifies our positioning: we believe QuEST is indeed the new SOTA for general QAT.
> The claim of improved efficiency seems somewhat orthogonal.
The goal of our kernels is showing that QuEST models can execute fast; this is not obvious since they require a Hadamard multiplication and dynamic clipping of the activations on the forward pass.
> It was unclear to me how well optimized the baseline in Figure 6 is.
The baseline in Figure 6 is near-optimal for BF16; please see more details in our reply to Reviewer `HH36`.
**Editorial comments and comments about supplementary material.**
Thank you for the detailed examination and useful editorial notes! We will address all these in the next revision, and add a more detailed README for the CUDA code structure.
> The use of Hadamard Transform seems similar to PTQ (e.g., QuIP#).
Please see “Regarding the Hadamard Transform (HT)” part of the reply to Reviewer `Xwq9`.
> It seems incorrect to refer to "Pareto-optimal frontier" as a metric.
Pareto-optimality is defined in the introduction (page 1, col.2, l.49-50), and follows Frantar et al., ICLR24. We say that an approach X (e.g., QuEST INT4) is Pareto-superior to Y (e.g., STE INT8) if X provides better accuracy at the same model size, or, symmetrically, smaller size at the same accuracy. Figure 1 shows that QuEST INT2 is Pareto-superior to BF16 pretraining, but inferior to QuEST INT4. Thus, QuEST brings the “optimal” precision in terms of accuracy-vs-size to INT4, since no other method dominates QuEST INT4 across sizes.
Kumar et al. (2024) don’t use the same terminology, but their metrics are similar. E.g., in Section 4.3.2 of their paper, they solve for P*, the precision that yields minimal loss at some model size, which is the “Pareto-optimal” precision.
We used a similar setting to Kumar et al. (2024) for the scaling law study, but improved significantly on their findings in terms of optimal precision via a new method: QuEST brings down the “optimal” training precision to 4bit, down from the 7-8bit precision found to be “optimal” for STE by Kumar et al. (2024).
> ...the sentence on lines 140-142 is true regardless of the smoothness of the loss function
Indeed, we could obtain a bound of $T^2 |S_{small}|$ just by summing over indices $k$ in $S_{small}$. We used smoothness since if $\gamma < 1$ we would get a better bound $\gamma^2 T^2 |S_{small}|$. We thank the reviewer for this note and will simplify the derivation to not use smoothness. | null | null | null | null | null | null | null | null |
Log-Sum-Exponential Estimator for Off-Policy Evaluation and Learning | Accept (spotlight poster) | Summary: The paper introduces a Log-Sum-Exponential (LSE) estimator to address off-policy evaluation (OPE) and learning (OPL) in contextual bandit settings where rewards or propensity scores may be noisy or heavy-tailed. By applying a log-sum-exp transformation over importance-weighted rewards, this estimator improves robustness and variance control. The main theoretical contributions include a bound on the regret ($\mathcal{O}\left(n^{-\frac{\varepsilon}{1+\varepsilon}}\right)$) under a ($1+\varepsilon$)-th moment assumption for the weighted reward and bias–variance analyses showing LSE can reduce variance compared to IPS. Experiments on synthetic data and supervised-to-bandit tasks (e.g., EMNIST) demonstrate lower MSE and higher accuracy than standard baselines (IPS variants, ES, PM, IX, etc.).
Claims And Evidence: Major Claims
- Lower variance and bias–variance trade-off: By taking a log-sum-exp over the weighted rewards, the estimator becomes less sensitive to large outlier samples.
- Robustness under heavy-tailed reward distributions: They provide a theoretical analysis under a ($1 + \varepsilon$)-th moment assumption, which accommodates unbounded or heavy-tailed random variables.
- Favorable regret/convergence bounds in off-policy learning: They prove an $\mathcal{O}\left(n^{-\frac{\varepsilon}{1+\varepsilon}}\right)$ convergence rate, which interpolates between $\mathcal{O}(n^{-1/2})$ and $\mathcal{O}(n^0)$ depending on the bounding of higher moments.
- Empirical gains: The authors demonstrate in synthetic and real-ish tasks (EMNIST or KUAIREC-like data) that the LSE estimator can outperform baselines in terms of MSE (in OPE) or final policy accuracy (in OPL).
Evidence
- Theory: Theorem 5.3 (regret bounds) and Propositions 5.5/5.7 (bounds on bias and variance) offer analytical proof that LSE yields guaranteed convergence rates and improved variance control, provided certain moment assumptions.
- Experiments: On synthetic data, the LSE exhibits lower MSE and variance than standard IPS or other model-free estimators (PM, ES, IX, OS). On EMNIST-based bandit tasks, it achieves good accuracy even with noisy reward signals or estimated propensity scores.
These results collectively support the authors’ main claims. Some details (like data-driven choice of \lambda) are shown in appendices, along with further ablations and real-data results.
Methods And Evaluation Criteria: Core method:
- The LSE estimator, $V_{\text{LSE}}^\lambda$, applies $\frac{1}{\lambda} \log \left(\frac{1}{n} \sum_{i=1}^{n} e^{\lambda r_{i} w_{\theta}\left(a_{i}, x_{i}\right)}\right)$ with a tunable parameter $\lambda<0$.
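A minimal numpy version of this estimator, using a numerically stable log-mean-exp (illustrative only; the $\lambda$ value below is arbitrary):

```python
import numpy as np

def ips_estimator(rewards, weights):
    # standard inverse-propensity-scoring value estimate
    return np.mean(rewards * weights)

def lse_estimator(rewards, weights, lam=-0.5):
    # V_LSE = (1/lam) * log( (1/n) * sum_i exp(lam * r_i * w_i) );
    # lam < 0 damps the influence of large outlier terms r_i * w_i
    z = lam * rewards * weights
    m = np.max(z)  # max trick for a numerically stable log-mean-exp
    return (m + np.log(np.mean(np.exp(z - m)))) / lam
```

As $\lambda \to 0^-$ the LSE value approaches the IPS estimate, while more negative $\lambda$ trades bias for robustness to heavy-tailed terms.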
Evaluation:
- In OPE, they empirically compare MSE, variance, and bias across multiple estimators.
- In OPL, a learned policy $\pi_{\theta}$ is optimized by maximizing the LSE-based objective. The main metric is regret or final classification accuracy after learning.
- Baselines: The paper carefully compares against standard IPS variants (e.g., truncated IPS), exponent smoothing (ES), power-mean (PM), self-normalized IPS (SNIPS), and so on.
All these benchmarks and metrics (variance, MSE, accuracy, regret) align with accepted practice in bandit OPE/OPL research.
Theoretical Claims: The authors present:
- Regret Analysis (Theorem 5.3 & Proposition 5.4): Shows the LSE-based OPL converges at $\mathcal{O}(n^{-\tfrac{\varepsilon}{1+\varepsilon}})$ under a heavy-tailed assumption on weighted rewards.
- Bias and Variance Bounds (Propositions 5.5 & 5.7): Provide an upper bound on LSE’s bias in terms of $|\lambda|^{\varepsilon}$ and a straightforward variance bound that is no greater than that of IPS under second-moment assumptions.
- Robustness: Theorem 5.9 addresses noisy reward distributions, bounding the regret to show that total variation distance from the clean distribution plus the LSE’s own hyperparameter $\lambda$ determine the final bound.
These proofs are given at length in the appendices, citing standard concentration inequalities (Bernstein, etc.) and carefully handling non-linear transformations. The logic in the statements is consistent, and no major errors stand out in the derivations.
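The sign of the LSE bias for $\lambda < 0$ follows from a one-line Jensen argument (a sketch consistent with the bounds above, not the paper's full proof):

```latex
% exp is convex, so Jensen gives E[e^{lambda X}] >= e^{lambda E[X]};
% with X = RW, take logs and divide by lambda < 0 (flipping the sign):
\frac{1}{\lambda}\log \mathbb{E}\!\left[e^{\lambda R W}\right]
  \;\le\; \mathbb{E}[R W], \qquad \lambda < 0 .
```

Hence the population LSE value is a pessimistic lower bound on the true policy value, with a gap that shrinks as $\lambda \to 0^-$ at the $|\lambda|^{\varepsilon}$ rate stated in Proposition 5.5.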
Experimental Designs Or Analyses: - Synthetic OPE: They design heavy-tailed reward distributions (Pareto-like or exponential) and random logging vs. target policies, measuring bias/variance/MSE across 10K replications.
- Supervised-to-bandit OPL: On EMNIST, the authors transform classification data into bandit feedback with partial logging. They vary logging policy quality (temperature $\tau$) and introduce both noisy propensity scores (modeled via inverse Gamma) and flipping reward noise.
- Baselines: They compare with up to 7–8 standard baseline methods.
- Metrics: MSE in OPE; classification accuracy for learned policies in OPL.
- Findings: LSE typically has lower MSE and variance than baselines in OPE, and yields better policy performance in OPL, especially under heavier reward tails or noisier propensity scores.
These experiment designs match the paper’s stated goals, although the authors might highlight more real-world scale tasks in future. The demonstration is nonetheless credible for a conference submission.
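The supervised-to-bandit conversion used in these experiments can be sketched generically as follows; the random-score softmax logging policy and the `tau`/`seed` arguments are stand-ins for the paper's actual learned logging policy:

```python
import numpy as np

def to_bandit(labels, n_classes, tau=1.0, seed=0):
    # Generic supervised-to-bandit recipe: a softmax logging policy
    # (random scores here; in practice a trained model's logits)
    # samples one action per context; only that action's reward
    # (1 if it equals the true label) and its propensity are logged.
    rng = np.random.default_rng(seed)
    n = len(labels)
    logits = rng.normal(size=(n, n_classes)) / tau
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    actions = np.array([rng.choice(n_classes, p=p) for p in probs])
    rewards = (actions == np.asarray(labels)).astype(float)
    propensities = probs[np.arange(n), actions]
    return actions, rewards, propensities
```

Lower $\tau$ makes the logging policy more peaked, which controls the overlap with (and hence the importance weights for) the target policy.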
Supplementary Material: The authors present:
- Extended proofs of the theorems (in Appendix D), including details on how the heavy-tail assumption is invoked.
- Additional experiments: e.g., using the Lomax distribution for logging and target policies, exploring real-ish data from KUAIREC, sensitivity to $\lambda$, ablations on sample size, comparisons with older baselines, etc.
- PAC-Bayes viewpoint, data-driven selection of $\lambda$, and additional details about the gamma noise for estimated propensity scores.
I have skimmed through the relevant appendices. The supplementary materials add clarity on both theoretical derivations and experiment details.
Relation To Broader Scientific Literature: - The paper extends model-free OPE approaches (IPS, truncated IPS, ES, PM, IX, etc.) by introducing a non-linear transformation that can handle outliers.
- The approach is reminiscent of “robust mean estimation” under heavy tails (median-of-means, trimmed mean), but specialized to importance-weighted bandit feedback.
- The results connect to well-known bandit or offline RL settings under "pessimistic" or distribution-shift assumptions, though this work focuses specifically on the logged bandit feedback (LBF) setting with (potentially) heavy-tailed or noisy rewards.
Overall, the authors do cite standard references (IPS, ES, PM, etc.) and situate their work among current OPE/OPL techniques. Additional connections to sub-Gaussian bounding and robust M-estimators are also discussed.
Essential References Not Discussed: The paper covers the most relevant prior model-free approaches to mitigate variance in off-policy evaluation (IPS, truncated, ES, PM, etc.). It also references works on heavy-tailed or robust bandit algorithms. No obviously critical references are missing. I believe the submission includes enough prior work for the standard context.
Other Strengths And Weaknesses: Strengths
- This log-sum-exp transformation is conceptually simple yet yields robust performance for unbounded or noisy data.
- Theoretical thoroughness: They provide formal bias-variance bounds and regret guarantees that unify heavy-tailed analysis with a single hyperparameter $\lambda$.
- Robustness: The authors show how the LSE estimator can remain stable even with substantial reward noise or propensity mis-specification.
- Empirical validations: They compare thoroughly to an array of baselines.
Weaknesses
- The paper discusses data-driven methods in the appendices, but real deployments might need more straightforward or adaptive selection heuristics.
- The approach is a single-pass aggregator, so computational complexity is no worse than other OPE methods, but potential memory or tuning overhead for extremely large datasets is not deeply addressed.
- They rely mostly on small to medium transformations (EMNIST or partial KUAIREC). The method’s performance in large, production-scale logs (with tens of millions of rows) is not tested.
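The log-sum-exp transformation discussed in these strengths and weaknesses can be made concrete with a minimal numerical sketch. This assumes the $\lambda$-smoothed quasi-arithmetic mean form $(1/\lambda)\log(\mathrm{mean}(e^{\lambda x}))$ with $\lambda < 0$; the Pareto sampler and the $\lambda$ value are illustrative stand-ins for the paper's importance-weighted rewards, not its exact setup:

```python
import math
import random

def lse_estimate(values, lam):
    """Lambda-smoothed quasi-arithmetic mean: (1/lam) * log(mean(exp(lam * v)))."""
    n = len(values)
    m = max(lam * v for v in values)  # stabilise the exponentials
    return (m + math.log(sum(math.exp(lam * v - m) for v in values) / n)) / lam

random.seed(0)
# Heavy-tailed samples standing in for weighted rewards (Pareto, shape 1.5).
samples = [random.paretovariate(1.5) for _ in range(10_000)]

naive = sum(samples) / len(samples)       # plain Monte Carlo (IPS-style) mean
robust = lse_estimate(samples, lam=-0.5)  # lam < 0 damps large outliers

print(f"naive mean: {naive:.3f}, LSE estimate: {robust:.3f}")
```

With $\lambda < 0$ the LSE lies below the arithmetic mean (by Jensen's inequality) and is far less sensitive to occasional huge samples, which is the variance-reduction behavior the review describes.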
Other Comments Or Suggestions: See the above weaknesses section.
Questions For Authors: How does one best pick $\lambda$ in practical scenarios? Is there a recommended cross-validation strategy you find most reliable?
Could the LSE concept apply similarly in off-policy RL with trajectories? Are there any theoretical or practical obstacles?
Have you tried combining LSE with direct reward modeling or a doubly robust approach? If so, does the non-linear structure complicate it?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and comments on the paper. We will address their concerns as detailed below.
> single-pass aggregator
**R1:** Thanks for raising this point. It is true that, in theory, LSE is not separable and should be applied to the whole dataset. However, thanks to the convenient gradient form of LSE, fully stochastic optimization of LSE is also possible for large-scale datasets.
Suppose that $x_1, \ldots, x_{kl}$ are the data of $k$ batches of size $l$. LSE is a quasi-arithmetic mean function, so we have
$$\mathrm{LSE}(x_1, \ldots, x_{kl}) = \mathrm{LSE}\big(\mathrm{LSE}(x_1, \ldots, x_l), \mathrm{LSE}(x_{l+1}, \ldots, x_{2l}), \ldots, \mathrm{LSE}(x_{(k-1)l+1}, \ldots, x_{kl})\big) = \mathrm{LSE}(A_1, A_2, \ldots, A_k)$$
Now we have
$$\nabla_{\theta}\mathrm{LSE}(x_1, \ldots, x_{kl}) = \sum_{i=1}^{k}\frac{d}{dA_i}\mathrm{LSE}(A_1, \ldots, A_k)\,\nabla_{\theta}A_i = \langle\mathrm{softmax}(\lambda \mathbf{A}), \nabla_{\theta}\mathbf{A}\rangle$$
So the gradient of the LSE on the entire data is a weighted average of the per-batch gradients, but, unlike the Monte Carlo mean, these weights are not uniform. We construct the following procedure for SGD optimization.
We store a coefficient $c_i$ for each $A_i$. Our final objective is to find $c_i$ such that $c_1, \ldots, c_k$ are proportional to $\mathrm{softmax}(\lambda \mathbf{A})$. For this to happen, suppose we are at step $t+1$ and we have $c_1, \ldots, c_t$ as the coefficients of $A_1, \ldots, A_t$. We now want to find $c_{t+1}$ and apply GD on $c_{t+1}A_{t+1}$. It is sufficient to have the following equality:
$$\frac{e^{\lambda A_{t+1}}}{e^{\lambda A_t}} = \frac{c_{t+1}}{c_t}$$
Hence, at step $t$, we apply gradient descent this way:
$$c_{t+1} = c_t\, e^{\lambda(A_{t+1} - A_t)}, \qquad \theta^{(t+1)} = \theta^{(t)} - \eta\, c_{t+1}\nabla_{\theta}A_{t+1}$$
where $\eta$ is the learning rate and $c_1 = 1$. We will add this procedure for batch optimization as a discussion in the paper.
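The two facts underpinning this procedure can be checked numerically. A small sketch, assuming LSE denotes the $\lambda$-smoothed quasi-arithmetic mean $(1/\lambda)\log(\mathrm{mean}(e^{\lambda x}))$ and using toy data (batch sizes must be equal for the merging identity to hold exactly):

```python
import math

def lse(values, lam):
    """Lambda-smoothed quasi-arithmetic mean: (1/lam) * log(mean(exp(lam * v)))."""
    n = len(values)
    m = max(lam * v for v in values)  # stabilise the exponentials
    return (m + math.log(sum(math.exp(lam * v - m) for v in values) / n)) / lam

lam = -0.5
data = [0.3, 1.7, 4.2, 0.9, 2.8, 6.1]              # toy weighted rewards
batches = [data[i:i + 2] for i in range(0, len(data), 2)]

# Quasi-arithmetic property: LSE over all data equals LSE of per-batch LSEs
# (for equal-size batches), which licenses batch-wise computation.
full = lse(data, lam)
merged = lse([lse(b, lam) for b in batches], lam)
print(full, merged)  # agree up to floating-point error

# The per-sample gradient is the softmax weight claimed above:
# d LSE / d x_i = softmax(lam * x)_i. Check by finite differences at i = 2.
i, eps = 2, 1e-6
bumped = [x + (eps if j == i else 0.0) for j, x in enumerate(data)]
fd = (lse(bumped, lam) - full) / eps
soft = math.exp(lam * data[i]) / sum(math.exp(lam * x) for x in data)
print(fd, soft)
```

The merge identity justifies computing LSE batch by batch, and the finite-difference check confirms the softmax weights that appear in the update rule for $c_{t+1}$.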
> KUAIREC and real-world dataset
**R2:** Thank you for the insightful comment. We agree that evaluating performance on production-scale datasets is crucial. KuaiRec's 12M-interaction (big) dataset (7K users, 10K items) was used for training and validation, while its denser (small) subset was used solely for testing, given its density. We will clarify this in the final version to emphasize that our method has indeed been evaluated on a dataset of production-level scale. Furthermore, we conducted additional experiments on further datasets, including the [**PubMed 200K RCT dataset**](https://anonymous.4open.science/r/icml2025_response-F5FC/response_rct.md).
> Choosing $\lambda$ in practical scenarios
**R3:** We use three types of $\lambda$ selection throughout the paper: one based on validation-data performance (grid search), one data-independent selection (App G.7), and one data-driven selection (App G.8). The first method gives better performance and is recommended when it is computationally feasible; for large-scale datasets, however, the data-driven approach can work especially well, because it requires neither a prior nor heavy computation and can find a suitable $\lambda$ according to the data.
> LSE in off-policy RL with trajectories
**R4:** Thank you for the insightful suggestion. In principle, the LSE operator could be applied in off-policy reinforcement learning, potentially serving as an alternative to traditional importance sampling. However, it brings theoretical (e.g., requiring i.i.d. assumptions) and practical challenges (e.g., computational overhead in algorithms like PPO [1]). We consider this a promising direction and plan to explore it in future work.
> LSE with direct reward modeling or a doubly robust approach
**R5:** Thank you for the suggestion. Estimators that incorporate reward modeling fall under model-based approaches, while those that do not are considered model-free. In this work, we primarily focus on model-free estimators. However, we did explore combining our estimator with the doubly robust (DR) approach, as discussed in Appendix G.3, and found that the resulting DR-LSE variant outperforms other baselines. Importantly, the non-linear structure of LSE does not introduce significant complications in this integration. For a fair comparison, we did not explore reward modeling based directly on the LSE framework, as our definition of LSE is grounded in weighted rewards rather than direct reward estimation. Nonetheless, we find this direction promising and plan to consider it in future work.
---
**References:**
[1]- Schulman, John, et al. "Proximal policy optimization algorithms." | Summary: The paper introduces a novel Log-Sum-Exponential (LSE) estimator for off-policy evaluation (OPE) and off-policy learning (OPL) in reinforcement learning, especially when dealing with logged bandit feedback datasets that may contain unbounded or heavy-tailed rewards. The paper analyzes the LSE estimator's regret bounds, bias, and variance, and it also explores its robustness to noisy rewards and propensity scores. The LSE estimator's performance is empirically compared against several baseline estimators, including truncated IPS, PM, ES, IX, Bandit-Net, LS-LIN, and OS, using both synthetic and real-world datasets. The document also offers theoretical insights into why the LSE estimator is well-suited for scenarios with heavy-tailed reward distributions and provides guidelines for selecting the estimator's parameter λ. The experimental code and data are also provided to support claims about its effectiveness.
Claims And Evidence: The claims of this paper are generally supported by the argument in this paper.
Methods And Evaluation Criteria: All evaluations are based on simulations. The results will be more convincing if data from randomized controlled trials can be used to validate the proposed method.
Theoretical Claims: The theoretical claims and proofs of the paper are rigorous and valid.
Experimental Designs Or Analyses: The simulation experiments are comprehensive and convincing. That said, I think the experiments and validation could be stronger if data from RCTs can be used to evaluate the proposed LSE-based method.
Supplementary Material: The supplementary materials are comprehensive.
Relation To Broader Scientific Literature: The main contributions of this paper are three-fold. First, the authors develop a novel non-linear estimator based on the LSE operator, which substantially reduces the variance. Second, comprehensive theoretical performance guarantees of LSE-based OPE and OPL are provided. Third, simulated experiments show that the proposed estimator performs well compared with other SOTA algorithms.
Essential References Not Discussed: Not aware of.
Other Strengths And Weaknesses: I think the paper can be strengthened with evaluations based on experimental data.
Other Comments Or Suggestions: N.A.
Questions For Authors: Can the authors use RCT data to validate the proposed method?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and generally positive assessment of the paper. We will address their concerns as detailed below.
> The simulation experiments are comprehensive and convincing. That said, I think the experiments and validation could be stronger if data from RCTs can be used to evaluate the proposed LSE-based method. Can the authors use RCT data to validate the proposed method?
**R1:** We appreciate the reviewer's suggestion for running experiments in RCT dataset. The result can be found in the [**following link**](https://anonymous.4open.science/r/icml2025_response-F5FC/response_rct.md). | Summary: The paper proposes an estimator based on the log-sum-exponential (LSE) operator designed for off-policy evaluation (OPE) and off-policy learning (OPL) in contextual bandit settings. The LSE estimator addresses the issue of high variance in inverse propensity score (IPS) estimators by introducing robustness to noisy propensity scores and heavy-tailed reward distributions. The authors provide theoretical guarantees on the bias, variance, and regret for this estimator, with particular focus on its performance under heavy-tailed assumptions on weighted rewards. Empirical results from synthetic experiments and real-world datasets validate the practical effectiveness of the proposed method compared to existing estimators like IPS and others.
Claims And Evidence: The paper makes different claims, which seem to be supported by theoretical results and empirical evidence.
First, it claims that the LSE estimator reduces variance and handles heavy-tailed reward distributions more effectively than the IPS estimator and its variants. This claim is supported by a detailed theoretical analysis, which includes bounds on bias, variance, and regret.
Empirically, the paper shows that the LSE estimator performs better in terms of mean squared error (MSE) and variance compared to competing methods in both synthetic and real-world experiments. This part, in my opinion, can be strengthened with an additional experiment.
Methods And Evaluation Criteria: The proposed LSE estimator is evaluated both theoretically and empirically. This is the usual way of evaluating OPE/OPL methods.
Theoretical Claims: From what I could check, the theoretical claims in the paper seem well-supported.
Experimental Designs Or Analyses: The experimental design appears sound, but there are some limitations in the scope of the experiments. The experimental setup could benefit from more diverse datasets and a wider range of experimental conditions. For instance, experiments involving different types of reward distributions and more real-world applications
Supplementary Material: The supplementary material, including proof details, additional experiments, and discussion on related work, is thorough. However, there could be further clarification regarding the role of the smoothing parameter $\lambda$ and its effect on the performance across different types of reward distributions.
Relation To Broader Scientific Literature: The paper is related to prior work in the area of off-policy evaluation and learning.
Also, it is related to heavy-tailed bandits, which is not typical in the OPE/OPL literature. I think that the authors did a good job in their related work sections, which position the paper with respect to specific related prior papers.
Essential References Not Discussed: I think that the authors did a good job in their related work sections and all the essential references are more or less present in the paper.
Other Strengths And Weaknesses: In my opinion, the main weakness of the paper (apart from the experimental section which could be expanded, a point already discussed) is the apparent lack of novelty. Essentially, the paper applies the well-known log-sum-exponential (LSE) technique to OPE/OPL
However, despite this perceived lack of novelty, I believe the paper still meets the high standards of ICML due to its interesting theoretical analysis and the strong performance of the LSE estimator in the OPE/OPL domains. To the best of my knowledge, LSE has not previously been applied to OPE/OPL, and this work may represent the first contribution demonstrating that LSE can be a valuable addition to the OPE/OPL toolkit.
Other Comments Or Suggestions: Very minor issue: there seem to be some inconsistencies in the References section
Questions For Authors: - Could you clarify how the smoothing parameter $\lambda$ affects the performance of the LSE estimator?
- Have you considered experimenting with a broader range of real-world datasets to assess the robustness of the LSE estimator in different domains?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and generally positive assessment of the paper. We will address their concerns as detailed below.
> Novelty
**R1:** We appreciate the reviewer’s feedback and the opportunity to clarify the novelty of our work. While it is true that we employ the log-sum-exponential (LSE) technique, one of our key contributions lies in the novel theoretical results we derive in the context of OPE/OPL. Specifically, our analysis establishes new insights that were not previously explored in the literature. These results go beyond a straightforward application of LSE, as we introduce a novel formulation and provide rigorous proofs that reveal deeper theoretical properties of the method in this setting under heavy-tailed assumptions.
Additionally, while LSE is a well-known technique/operator, its application to OPE/OPL presents unique challenges, which we address through our theoretical analysis. We believe these contributions offer a meaningful advancement in understanding LSE for heavy-tailed applications.
We hope this clarification helps address the reviewer’s concern, and we would be happy to further elaborate on the specific theoretical novelties if needed.
> For instance, experiments involving different types of reward distributions and more real-world applications. Have you considered experimenting with a broader range of real-world datasets to assess the robustness of the LSE estimator in different domains?
**R2:** We conducted more experiments on RCT dataset where the results can be found in the [**following link**](https://anonymous.4open.science/r/icml2025_response-F5FC/response_rct.md). In addition, additional experiments including multiple reward distributions are conducted. We fixed the distribution of the policies to be Gaussians with different locations. To test a variety of heavy-tailed distributions, we assume that we have a single state $s=s_0$ and when $a|s_0 \sim \pi_0$, the distribution of $r(s_0, a)$ is of a particular family. We considered Lomax, Generalized Extreme Value (GEV), T-student, and Fréchet distributions. We consider the absolute value of the samples to ensure the reward is positive. The table of the performance of our method compared to other methods is [**available here**](https://anonymous.4open.science/r/icml2025_response-FC68/response_Ve41_R2.md).
> Could you clarify how the smoothing parameter $\lambda$ affects the performance of the LSE estimator?
**R3:** The effect of $\lambda$ for the supervised-to-bandit experiments where the reward is binary is investigated in App G.6. For continuous heavy-tailed reward distributions, in the same setting mentioned in R2, we vary $|\lambda| \in \{10^i \mid -3 \leq i \leq 2,\ i \in \mathbb{Z}\}$ and observe the change in the MSE of the LSE estimator. The graphs are available at [**this link**](https://anonymous.4open.science/r/icml2025_response-F5FC/response_Ve41_R2.md).
> inconsistencies in the References section
**R4:** Thanks for pointing out. It is fixed now.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the additional experiments and the clarifications!
They addressed many of my concerns. I raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Ve41,
We just wanted to sincerely thank you for taking the time to carefully read our rebuttal and for your thoughtful consideration of our responses. We truly appreciate the constructive feedback you provided throughout the review process, and we are grateful for your support and for the updated evaluation of our work.
Your detailed comments and suggestions were very helpful to us in improving our paper, and we are glad that our clarifications could address your concerns.
Thank you again for your time, effort, and support.
Best regards,
Authors | Summary: The paper proposes to use the log-sum-exponential operation as an off-policy estimator, proving bounds on the mean and variance of the estimates, as well as the performance gap between the optimal and learned policies in off-policy learning, and convergence rates. They follow this up with empirical evaluations.
The estimator is chosen specifically to deal with the heavy-tailed distribution resulting from inverse propensity weights, so their analysis holds under heavy-tailed assumptions, which additionally cover unbounded rewards, making the analysis applicable in a variety of domains.
Claims And Evidence: There are two sets of claims to the paper.
The first is that the proposed estimator reduces variance, and has provable bounds on bias and variance with heavy tailed and noisy reward observations. Further, they provide regret bounds in the off-policy evaluation and learning setups, as well as the regret convergence under heavy-tailed settings which include unbounded reward. These claims are primarily and adequately supported by theoretical results.
The second set of claims is that the proposed estimator has competitive performance.
Methods And Evaluation Criteria: Yes, they do a standard setup (running various estimators with 10k trials of taking 1k samples, and calculating mean squared error and variance) for OPE on a synthetic data distribution and OPL on the EMNIST dataset.
Theoretical Claims: Not carefully
Experimental Designs Or Analyses: Yes, the experimental designs for OPE and OPL were checked
Supplementary Material: I poked through the proofs but not in detail
Relation To Broader Scientific Literature: They compare against several similar off-policy estimators
Essential References Not Discussed: None that I am aware of
Other Strengths And Weaknesses: The work is original as far as I know.
The work is significant, being applicable to a common setting and addressing a key problem of heavy-tailed rewards.
The paper is consistently clearly written.
Other Comments Or Suggestions: none
Questions For Authors: It seems that these results could potentially be related to a Bayesian setting with exponential family distributions. Have the authors thought about this? I'm curious about the results, and it would increase my (already positive opinion of) the paper, even though I don't think it needs to be included here.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and generally positive assessment of the paper. We will address their concerns as detailed below.
> It seems that these results could potentially be related to a Bayesian setting with exponential family distributions. Have the authors thought about this? I'm curious about the results, and it would increase my (already positive opinion of) the paper, even though I don't think it needs to be included here.
**R1:** We thank reviewer 2WnN for their insightful suggestion. An interesting direction for future work is to explore potential application of theoretical results in Bayesian inference frameworks, particularly through the lens of exponential family distributions and variational approximations. As the LSE can be interpreted as log-partition function in exponential family distribution, our methods can be applied in this field. | null | null | null | null | null | null |
CoMemo: LVLMs Need Image Context with Image Memory | Accept (poster) | Summary: - This paper proposes CoMemo, a hybrid architecture of LLaVA and Flamingo’s cross-attention module.
- It also has a 2D positional encoding mechanism (RoPE-2D) to better preserve spatial relationships, addressing the issue of visual information neglect in long contexts.
- Experimental results across multiple benchmarks indicate that CoMemo improves performance on tasks requiring long-context comprehension and multi-image reasoning.
Claims And Evidence: The authors claim their contributions are equally split (50% each), but both are weak:
1. Novelty of CoMemo: The authors present CoMemo as a novel architecture, but it is essentially a hybrid of LLaVA and cross-attention (Flamingo-style). While hybridity itself isn't an issue, the same architecture was introduced earlier, e.g., Q-Former in BLIP-2 [1] (published January 2023), which is neither cited nor included in comparisons.
2. Extending RoPE to 2D: The authors claim to be the first to extend RoPE to 2D, but this is incorrect. The concept was already introduced as M-RoPE in Qwen2-VL (published September 2024), which they cite and include in Table 1. They attempt to differentiate their work in lines 412–418, arguing:
> "In contrast, Qwen-VL2 (Wang et al., 2024b) introduced M-RoPE, a multi-dimensional position encoding extending RoPE to three channels (temporal, height, width) for im- ages and videos. However, this increases computational costs, tripling the original RoPE overhead and introducing redundancy by requiring these channels for text tokens."
>
However, this claim is incorrect. The Qwen-VL2 paper explicitly states that when processing 2D images, the temporal dimension remains constant, meaning no additional computational cost:
> "When processing images, the temporal IDs of each visual token remain constant, while distinct IDs are assigned to the height and width components based on the token’s position in the image."
>
3. Additionally, the evaluation is weak, with few baselines, missing values in Table 1, and unclear figures (e.g., missing axis ranges in Figure 1). Overall, the work requires significant improvement.
Methods And Evaluation Criteria: Lines 51–52 state: "masks out part of the image information in the mixin layer to prevent over-reliance on this path," raising the following questions:
- How is the masked information selected?
- Since the approach first introduces image context and then applies random dropout, doesn’t this introduce inefficiency? Was this design choice specifically considered?
- Why mask the cross-attention path (MixinLayer) while treating the autoregressive path as the primary method for incorporating image context, rather than the reverse?
Theoretical Claims: - It is unclear how Findings 1–3 in Section 2 are derived.
- The main claims are questionable; see the *Claims and Evidence* section above for details.
Experimental Designs Or Analyses: - Table 1 has too many missing values. For example, Qwen2-VL-2B explicitly reports an MMMU score of 41.1 in its paper, but the authors omit this unfavorable result.
- The comparison includes an outdated version of MiniCPM (V-2); the appropriate comparison for InternVL-2.5 should be MiniCPM V2.5 or V2.6.
- Too few baselines are considered. For instance, BLIP [2] and BLIP-2 [1], which are closely related, should be included.
Supplementary Material: - **B. Training Details**
- **D. Dataset Details**
Relation To Broader Scientific Literature: See the 'Essential References Not Discussed' section below.
Essential References Not Discussed: The following key literatures are missing:
[1] Li, J., Li, D., Savarese, S. and Hoi, S., 2023, July. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning (pp. 19730-19742). PMLR.
[2] Li, J., Li, D., Xiong, C., and Hoi, S. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International conference on machine learning, pp. 12888–12900. PMLR, 2022.
Other Strengths And Weaknesses: Beyond the weaknesses in theoretical claims, literature coverage, and evaluation results, the overall writing is poor and unpolished, with broken logic and a lack of clarity.
Other Comments Or Suggestions: - Figure 1 is difficult to interpret without axis scales.
- If the Mixin layer is essentially cross-attention, renaming it adds unnecessary complexity.
- **Typo:** l.137 – *"tow pathways"* → *"two pathways"*
- **Missing reference:** l.155 – *"As shown in 5"* should specify the correct figure number.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: `Q1: CoMemo is essentially a hybrid of LLaVA and cross-attention (Flamingo-style). `
First, no current open-source or closed-source model offers such an architecture. As Reviewer vTuL noted, "It is exciting to see an effective solution that leverages the advantages of both approaches." Furthermore, in L45-46 of our paper, we emphasize that a naive combination of these two structures would fail. It is precisely the novel techniques we introduce, beyond simple hybridization, that differentiate CoMemo from a mere hybrid of the cross-attention and LLaVA frameworks.
`Q2: The same architecture was introduced earlier, e.g., Q-Former in BLIP-2.`
The most fundamental reason is that the Q-Former is essentially a projector. In BLIP, the representations between text tokens and image tokens are still updated via self-attention, which differs significantly from CoMemo’s approach of directly using cross-attention for representation updates. Therefore, BLIP is not orthogonal to LVLM-S—in essence, it is a special case of the LVLM-S architecture.
`Q3: The authors claim to be the first to extend RoPE to 2D, but this is incorrect. M-RoPE means no additional computational cost.`
This comment misinterprets our claim and overlooks an important technical distinction. As stated in L26–L32 and L82–L84 of our paper, we introduce a RoPE-based 2D encoding scheme specifically designed for the DHR (Dynamic High-Resolution) technique.
Key differences from M-RoPE are as follows:
1. LLM Training Requirements: M-RoPE operates in 3D space, requiring dedicated training. In contrast, our RoPE-2D uses 1D computation, directly reusing the original RoPE from the LLM without additional alignment training.
2. Resolution Schemes: M-RoPE was designed for Naive Dynamic Resolution, not DHR. In Qwen-VL, RoPE replaces ViT’s absolute position encoding to support varying resolutions, but this requires extra training.
3. Additional RoPE Cache Computation: M-RoPE’s 3D RoPE requires cache computation for each dimension, which adds complexity.
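To make point 1 concrete: a textbook sketch of standard 1-D RoPE (a generic implementation, not the CoMemo or Qwen2-VL code; RoPE-2D's specifics are in the paper). The check at the end illustrates the relative-position property that allows a pretrained LLM's rotary encoding to be reused without additional alignment training:

```python
import math

def rope_1d(vec, pos, base=10000.0):
    """Textbook 1-D rotary position embedding: rotate each pair of dimensions
    by a position-dependent angle."""
    d = len(vec)
    out = list(vec)
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        out[i] = vec[i] * c - vec[i + 1] * s
        out[i + 1] = vec[i] * s + vec[i + 1] * c
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = [0.3, -1.2, 0.7, 0.5]
k = [1.1, 0.4, -0.6, 0.9]

# Attention scores depend only on the relative offset (7 - 5 == 107 - 105),
# so shifting all positions leaves the score unchanged.
near = dot(rope_1d(q, 5), rope_1d(k, 7))
far = dot(rope_1d(q, 105), rope_1d(k, 107))
print(near, far)
```

Because the rotations enter attention only through position differences, any scheme that maps image patches onto these same 1-D rotations can inherit the pretrained behavior, which is presumably what lets the rebuttal's RoPE-2D avoid extra alignment training.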
`Q4: Missing values in Table 1.`
This table is only intended to show that our experimental data is not from a small-scale dataset. Additionally, we have now filled in [most of the missing values](https://drive.google.com/file/d/1jCKPKPMcAjnJc_1FCwqQsSN9W9luK7qP/view?usp=sharing).
`Q5: Qwen2-VL-2B explicitly reports an MMMU score of 41.1 in its paper, but the authors omit this unfavorable result.`
The benchmark we evaluated does not include MMMU.
`Q6: The appropriate comparison for InternVL-2.5 should be MiniCPM V2.5 or V2.6.`
Since the CoMemo model in our main experiments is at the 2B scale, and MiniCPM V2.5 and V2.6 are 7B-scale LVLMs, we chose to compare with MiniCPM-V-2, which is closer in model size.
`Q7: Too few baselines are considered. For instance, BLIP [2] and BLIP-2 [1], which are closely related, should be included.`
We selected the two baselines because they are currently the only two mainstream architectures in this field. As for why we did not choose BLIP as a baseline, there are two main reasons:
1. The BLIP architecture can be considered a refined form of the LLaVA-like architecture and should essentially be covered under the LVLM-S framework.
2. Previous studies have shown that, with the same data scale, the performance of Q-Former in BLIP is inferior to that of MLP[1, 2]. In BLIP-3, Q-Former is no longer used[3].
[1] DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models
[2] Honeybee: Locality-enhanced Projector for Multimodal LLM
[3] xGen-MM (BLIP-3): A Family of Open Large Multimodal Models
`Q8: Questions about mask during pretrain.`
`Q8.1: How is the masked information selected?`
We use a random masking strategy (L261–267) to remove image information by masking image tokens post-tokenization.
`Q8.2: Since the approach first introduces image context and then applies random dropout, doesn’t this introduce inefficiency?`
L186–189 clarify that the Memory Path and Context Path reuse the same image representations, incurring no extra computation. The minor latency from dropout (masking) in the Memory Path is negligible.
`Q8.3: Why mask the cross-attention path while treating the autoregressive path as the primary method?`
In L13–20, we briefly introduced and cited findings from Idefics-2, indicating that under the same parameter count and training data, the LVLM-S approach outperforms LVLM-X.
In Section 2.3 of the paper, we further conducted an ablation study on this. As shown in Figure 5, as the number of pretraining steps increases, the model exhibits stronger reliance on the cross-attention path, yet this leads to suboptimal performance. | Summary: The paper introduces CoMemo to improve visual information retention in multimodal tasks. Specifically, CoMemo includes a memory path for image tokens that operates independently of the main text path. This helps prevent visual information loss during long-context reasoning. Then CoMemo uses RoPE-2D encoding that maintains 2D spatial relationships in high-resolution images. This reduces performance degradation in long sequences. Finally, CoMemo uses the memory mixin strategy during training to ensure both the context path and memory path contribute effectively.
Claims And Evidence: This paper is well-written, systematically introducing the limitations of existing models before presenting effective solutions. For instance, to illustrate the "lost in the middle" problem, the authors provide clear evidence through gradient heatmaps and evaluation results on long-context benchmarks, demonstrating CoMemo's improved performance over LVLM-X and LVLM-S.
Methods And Evaluation Criteria: The evaluation process used in this paper follows a popular setup.
Theoretical Claims: The theoretical claims presented in the paper are solid.
Experimental Designs Or Analyses: The experimental setup is reasonable and the authors give sufficient analysis.
Supplementary Material: I reviewed the supplementary material.
Relation To Broader Scientific Literature: The authors introduce their proposed method by addressing three key challenges: the "lost in the middle" phenomenon, "remote decay in dynamic high-resolution models", and "the balance between two pathways." The "lost in the middle" issue has been identified in previous studies [1,2], while "remote decay in dynamic high-resolution models" is a well-recognized challenge in the field.
[1] Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., and Liang, P. Lost in the middle: How language models use long contexts.
[2] Song, D., Chen, S., Chen, G. H., Yu, F., Wan, X., and Wang, B. MileBench: Benchmarking MLLMs in long context.
Essential References Not Discussed: The references are comprehensive and well-organized.
Other Strengths And Weaknesses: * RoPE-2D’s compression may degrade performance in fine-grained tasks like OCR.
* The dual-path architecture and RoPE-2D introduce additional computational overhead, potentially impacting real-time applications.
Other Comments Or Suggestions: “2.3. The balance between tow pathways” → “2.3. The balance between two pathways”
Questions For Authors: * RoPE-2D’s compression may degrade performance in fine-grained tasks like OCR. This doesn't seem very reasonable. Do the authors have any opinion on it?
* The authors show that LVLMs have the "lost in the middle" phenomenon with gradient and attention weights in Fig 3. I would like to know if the proposed method truly improves these aspects and if it can provide clearer visual results.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: `Q1: The dual-path architecture and RoPE-2D introduce additional computational overhead, potentially impacting real-time applications.`
While CoMemo introduces some computational overhead, we have designed mechanisms to minimize latency, making the time difference between CoMemo and LVLM-S nearly negligible.
1. Shared image token representation: Both the memory and context paths share the same image token representations, so the ViT image encoding is performed once without adding latency.
2. Fewer cross-attention layers: Unlike Flamingo and MLLama-3.2, which insert cross-attention after each transformer block, we use a 4:1 ratio, inserting only 6 cross-attention layers in InternLM2 (24 layers in total). Cross-attention computes interactions only between input tokens and image tokens, reducing computation since the image token sequence is shorter than the input sequence.
3. No KV-cache required for cross-attention: Cross-attention eliminates the need for key-value caching during inference. Decoding only requires computing the query for the current token, whereas self-attention involves additional memory for KV-cache and quadratic time complexity (O(N²)).
As shown in [Table 4](https://drive.google.com/file/d/1hcnKbAzDiZnCSX9GFJY2l2jObVkDYcDa/view?usp=sharing) and [Table 5](https://drive.google.com/file/d/1MPYN2gf7sCnyZA8os0DGaSsDidYKqxX2/view?usp=sharing), we report training and inference efficiency across different architectures. The results confirm that CoMemo has nearly the same latency as LVLM-S. Although LVLM-X achieves higher efficiency due to using fewer image tokens, its performance is significantly weaker than both CoMemo and LVLM-S.
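The 4:1 interleaving described in point 2 can be sketched structurally as a toy layer schedule (a minimal illustration of the layer count only; the names and the exact insertion points are hypothetical, not CoMemo's actual implementation):

```python
def build_layer_schedule(num_blocks=24, ratio=4):
    """Toy schedule: one cross-attention layer per `ratio` transformer blocks.

    With 24 blocks and a 4:1 ratio this yields 6 cross-attention layers,
    matching the counts quoted above. Placement is purely illustrative.
    """
    schedule = []
    for i in range(num_blocks):
        if i % ratio == 0:  # a cross-attn layer before every 4th block
            schedule.append("cross_attn")
        schedule.append("self_attn_block")
    return schedule
```

With the defaults, the schedule contains 24 self-attention blocks and `24 // 4 = 6` cross-attention layers.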
`Q2: RoPE-2D’s compression may degrade performance in fine-grained tasks like OCR.`
This is an important point, and we did not provide a detailed discussion in the original manuscript. The compression in RoPE-2D is a trade-off between improving long-context and long-generation tasks while potentially degrading performance on fine-grained tasks like OCR.
We propose a variant, RoPE-2D (no-overlap), which removes the compression. This version shows consistent improvements across various benchmarks, including OCR tasks.
Why does RoPE-2D affect OCR performance?
1. OCR is a fine-grained visual task that requires detailed information. The compression of positional information in RoPE-2D can reduce performance on tasks needing high resolution.
2. In OCR evaluations, excessive compression may occur, especially when the dynamic number for DHR (set based on InternVL and VLMEvalKit) is large. For example, in ChartQA, the compressed positional encoding causes 3328 image tokens to correspond to only 256 position IDs.
What if we map positional relationships without compression?
We explored RoPE-2D (no-overlap), where:
1. Subimage position IDs are mapped to thumbnail patch positions.
2. Position IDs accumulate based on the number of mapped patch tokens.
In this variant, position IDs are incrementally assigned, ensuring uniqueness across subimages. As shown in [Table 2](https://drive.google.com/file/d/1lXmFL1Nl0YYsIu6FStLt7v7VjMFUm1f5/view?usp=sharing), this approach improves all evaluation metrics compared to the baseline. However, it performs worse on long-context and generation tasks compared to the compressed RoPE-2D version.
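A minimal sketch of the incremental ID assignment described in the two steps above (this is only an illustrative reading of the scheme; the exact offsets and mapping in the implementation may differ):

```python
def no_overlap_position_ids(subimage_lens, thumbnail_len):
    """Illustrative no-overlap position ID assignment (offsets may differ).

    Each subimage's position IDs continue sequentially from where the
    previous block ended instead of reusing thumbnail patch IDs, so every
    ID is unique across subimages; the thumbnail IDs then accumulate past
    all mapped patch tokens.
    """
    subimage_ids, next_id = [], 0
    for n in subimage_lens:
        subimage_ids.append(list(range(next_id, next_id + n)))
        next_id += n
    thumbnail_ids = list(range(next_id, next_id + thumbnail_len))
    return subimage_ids, thumbnail_ids
```

For two subimages of 3 tokens each and a 4-patch thumbnail, the IDs 0–9 are assigned without repetition, in contrast to the compressed variant where subimage IDs collapse onto thumbnail patch IDs.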
We will include a more detailed analysis of RoPE-2D in the next version.
`Q3: The authors show that LVLMs have the "lost in the middle" phenomenon with gradient and attention weights in Fig 3. I would like to know if the proposed method truly improves these aspects and if it can provide clearer visual results.`
Thank you for this insightful question. We analyze the “lost in the middle” phenomenon from two perspectives and evaluate CoMemo’s improvements in these areas:
1. Attention Weights: CoMemo retains the self-attention mechanism from LVLM-S, so the image attention remains largely unchanged. However, CoMemo introduces an additional cross-attention mechanism, allowing input tokens to attend to image tokens, providing explicit visual grounding. As discussed in Section 2.3, the average gate value quantifies the strength of this visual attention, which is unaffected by the input context and thus does not suffer from the “lost in the middle” issue.
2. Image Token Gradients: In [Figure 1](https://drive.google.com/file/d/1gMQOcUC4qkBur4RVMLw6PbDCUC8JtQt0/view?usp=sharing), we compare the average gradients of image tokens during inference between LVLM-S and CoMemo across different benchmarks. We compute the gradients of output logits with respect to input image tokens, take the absolute value, and average across all image tokens. The results show that CoMemo significantly strengthens visual grounding. On the MMNIAH benchmark, where the “lost in the middle” issue is prominent, image token gradients in CoMemo nearly double compared to LVLM-S, clearly indicating that CoMemo mitigates this problem.
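The aggregation in point 2 reduces to a simple step once the backward pass has produced per-token gradients; a minimal sketch of just that averaging step (framework-agnostic, with plain lists standing in for tensors and the autodiff call itself elided):

```python
def avg_abs_image_token_gradient(input_grads, image_token_indices):
    """Mean absolute gradient over image tokens.

    input_grads: one gradient vector per input token (e.g. from backprop
    of the response logits w.r.t. the input embeddings); lists stand in
    for tensors to keep the sketch framework-agnostic.
    """
    total, count = 0.0, 0
    for idx in image_token_indices:
        for g in input_grads[idx]:
            total += abs(g)
            count += 1
    return total / count
```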
We will correct the typos in the next version. | Summary: This paper thoroughly investigates the flaws of LLM architectures when processing multimodal inputs, including the progressive neglect of central visual content as context expands and the failure of conventional positional encoding schemes in preserving 2D structures. To address these issues, this paper presents CoMemo to decouple the memory path for mitigating visual neglect and RoPE-2D to maintain 2D spatial awareness. The experimental results indicate the effectiveness of the proposed methods on multiple benchmarks.
###update after the rebuttal#####
Thanks to the authors for responding to my arguments in this paper. The newly added experimental results in the provided link are sufficient. It addressed my concerns. I tend to raise my score.
############################
Claims And Evidence: The claims are reasonable.
Methods And Evaluation Criteria: The employed benchmarks are comprehensive and reasonable for evaluating performance.
Theoretical Claims: NA
Experimental Designs Or Analyses: The authors only employ a small-scale model, i.e., InternLM-1.8B, to conduct experiments, lacking the verification of the universality of the proposed schemes. The authors are advised to verify the effectiveness across multiple architectures to ensure reliability.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: This paper is well-organized, and the listed key findings are insightful.
Other Comments Or Suggestions: Missing a space between words in the abstract part.
Please pay attention to keeping consistent decimal places in experimental results.
Questions For Authors: 1. In Table 1, the reviewer noticed that lots of results are missing for the majority of the models. What is the reason?
2. The performance of CoMemo still has significant gaps compared with other models on certain benchmarks. What is the reason?
3. Can the authors provide more explicit analyses and visualizations to indicate the effective mechanism of the proposed methods?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your recognition of our work and your positive comments regarding the well-organized and insightful findings of our paper.
Below, we will address your concerns point by point, and all suggested revisions will be incorporated into the next version of the manuscript. If our responses adequately address your concerns, we would sincerely appreciate it if you could consider adjusting your evaluation score. Thank you again for your time and thoughtful review.
`Q1: The performance of CoMemo still has significant gaps compared with other models on certain benchmarks. What is the reason?`
The main reason for CoMemo's performance gap compared to other models is the training data. As seen in the evolution of open-source models like InternVL, QwenVL, and LLaVA, architectural improvements were largely driven by data strategies, rather than changes in architecture. Differences in training data can obscure architectural distinctions, complicating the selection of the optimal model. As discussed in Section 4.2, the goal of this paper is to conduct an ablation study on model architectures using the same dataset, not to maximize absolute performance. Comparing with SOTA models highlights that our training dataset is not based on toy datasets, demonstrating the generalization ability of our results.
`Q2: In Table 1, the reviewer noticed that lots of results are missing for the majority of the models. What is the reason?`
We apologize for the confusion caused by the missing values in Table 1. Due to time constraints, we were unable to conduct a comprehensive evaluation, but we ensured that most benchmarks have at least three models for comparison, as shown in this [url](https://drive.google.com/file/d/1jCKPKPMcAjnJc_1FCwqQsSN9W9luK7qP/view?usp=sharing). Additionally, we will consider including a comparison of different models at the 8B scale in the next version.
`Q3: The authors only employ a small-scale model, i.e., InternLM-1.8B, to conduct experiments, lacking the verification of the universality of the proposed schemes.`
Thank you for your constructive and insightful suggestions. We agree that evaluating our framework at larger scales can better demonstrate its effectiveness. Therefore, we have conducted additional experiments at the 7B scale; see [Table 3](https://drive.google.com/file/d/1O6jIbebOjUmgPlfTAa1ytDVRuY_1IDTt/view?usp=sharing).
For the 7B-scale model, we adopted InternLM2-7B as the language backbone and InternViT-300M as the visual encoder. The results at this scale are largely consistent with those observed in the 2B experiments, further validating that our architecture follows the scaling law. As shown, performance improved across most vision tasks at the 7B scale. However, we observed a slight drop in performance on OCR-related tasks, which we attribute to the compression characteristics of our RoPE-2D positional encoding.
To address this issue, we also propose a variant called RoPE-2D (no-overlap). Our experiments at the 2B scale show that this variant provides more stable improvements across different tasks, as shown in [Table 2](https://drive.google.com/file/d/1lXmFL1Nl0YYsIu6FStLt7v7VjMFUm1f5/view?usp=sharing). Due to time and resource constraints (training a 7B-scale model on our dataset requires nearly two days with 128 A100 GPUs), we were unable to explore more positional encoding strategies or conduct further large-scale experiments at this time. However, we plan to include more extensive evaluations to further demonstrate the generalizability of our method in the next version.
`Q4: Can the authors provide more explicit analyses and visualizations to indicate the effective mechanism of the proposed methods?`
In [Figure 1](https://drive.google.com/file/d/1gMQOcUC4qkBur4RVMLw6PbDCUC8JtQt0/view?usp=sharing), we compare the average gradients of input image tokens between LVLM-S and CoMemo during inference across different benchmarks. Specifically, we compute the gradients with respect to the input tokens based on the logits corresponding to the model's response token IDs, then extract the image tokens from the indexed results. Since we only consider the magnitude of influence, we take the absolute value of the computed gradients before averaging across all image tokens to obtain the final result.
The comparison shows that through architectural adjustments and positional encoding modifications, the model's responses indeed demonstrate stronger focus on visual information, which verifies our mitigation of the image neglect problem.
`Q5: Missing a space between words in the abstract part. Please pay attention to keeping consistent decimal places in experimental results.`
Thank you for pointing out the formatting issues in our paper. We will address these problems and make the necessary corrections in the next version. | Summary: This paper introduces CoMemo, which provides two key design choices for MLLMs: 1) adding additional cross-attn like Flamingo, alongside the original llava-style approach, but masking part of the info to prevent over-reliance; 2) adding 2D RoPE to the LLM backbone for image features. The approach outperforms the llava-style and flamingo-style baselines. The author also provides detailed studies of MLLMs: 1) the lost-in-the-middle phenomenon; 2) the remote decay of RoPE; 3) the balance between the two pathways
Claims And Evidence: See below
Methods And Evaluation Criteria: See below
Theoretical Claims: See below
Experimental Designs Or Analyses: See below
Supplementary Material: No
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The writing is very unclear, and it's a bad experience to read this paper. One example is that the authors mention attn_gate and ffw_gate in Section 2.3 but first introduce them in 3.2. The writing of "Balance in DHR Allocation" is also unclear; the authors should point out that 1k, 2k, 4k mean steps, and I still don't know the detailed calculation of the gates avg. Algo1 should be formatted using a smaller font size; the current line breaks are very weird.
The experiments are solid, and the results are promising. The proposed methods consistently outperform the -X and -S variants; this paper provides a promising solution to integrate the Flamingo and LLaVA styles of MLLMs.
Other Comments Or Suggestions: There is a long-standing debate between the Flamingo style and the LLaVA style, and it's very exciting to see a good solution that leverages both advantages. The experimental results are good, but the paper is not ready, and the writing should be improved. I think with major revision this paper can be a very good and impactful work, but the current submission is not there yet.
Questions For Authors: To test free-form responses, the authors could also test more popular benchmarks like mm-vet. Can the authors provide justification on this point?
It seems the rope-2d is orthogonal to the -S variant. Can CoMemo still outperform the -S variant in Tables 2, 3, and 4 if rope-2d is applied to the -S variant?
It seems the rope-2d is harmful to OCR in Table 5; also, in Table 4, CoMemo is worse than the -S variant. Could you provide some justification on this point?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback to improve our work's clarity. As you rightly pointed out, proposing a framework that combines Flamingo-style and LLaVA-style approaches represents a highly impactful contribution. Thank you for recognizing the value of our work.
Below, we address your concerns point by point, and all suggested revisions will be incorporated in the next version. If you feel we have adequately addressed your feedback, we would be most grateful if you could consider adjusting your evaluation score accordingly. Thank you for your time and consideration.
`Q1: I still dont know what is the detailed calculation of gates avg.`
In L143 of the original manuscript, we have already mentioned how the avg. gates are calculated:
> Our analysis averages the attn_gate and ffw_gate values to quantify pathway preference.
Specifically, the average gate value is obtained by calculating the mean of the absolute values of the gate values from each mixin layer, corresponding to the data shown in the line chart in Figure 5.
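Concretely, the quantity plotted in Figure 5 reduces to the following (a minimal sketch; the gate values in the usage example are made up for illustration):

```python
def avg_gate(gate_values):
    """Mean of absolute per-layer gate values, as plotted in Figure 5.

    gate_values: one scalar gate per mixin layer (numbers are illustrative).
    """
    return sum(abs(g) for g in gate_values) / len(gate_values)
```

For instance, `avg_gate([0.2, -0.4, 0.6])` averages the gate magnitudes across three hypothetical mixin layers.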
`Q2: Authors mentioned attn_gate and ffw_gate at Section 2.3, but first introduce it in 3.2.`
This is because attn_gate and ffw_gate are not novel concepts introduced in our paper. However, we believe that adding citations when first mentioning these terms, or restructuring the content could significantly reduce potential confusion.
`Q3: The writing of Balance in DHR Allocation is also unclear, the author should point out that 1k, 2k, 4k means steps.`
They appear in Figure 5, as we stated in the caption of Figure 5:
> pretrained checkpoint corresponding to the x-axis.
The values 1k, 2k, and 4k indeed denote the training steps during pretraining.
`Q4: Algo1's format`
We apologize for the poor reading experience caused by the formatting issue. In the next version, we will adjust the layout to alleviate the impact of the line-breaks on readability.
`Q5: The author can also test more populat benchmark like mm-vet`
Sure! As evident from the [Table 4](https://drive.google.com/file/d/1_OsHi8-Ahy1YhaYpS6NK98q7ZOE-N0QM/view?usp=sharing), since MMVet is a small-scale evaluation set (200 Q&A pairs), CoMemo does not significantly outperform LVLM-S in overall score. However, it still demonstrates superiority in certain dimensions emphasizing visual perception (Recognition/Spatial/Math).
`Q6: It seems the rope-2d is harmful to OCR in Table 5. Could you provide some justification on this point?`
We have previously addressed this trade-off in the manuscript (L361-367, L392-394). However, we now propose an improved RoPE-2D(no-overlaps) variant that achieves more balanced improvements across all capabilities.
Why RoPE-2D affects OCR performance? The performance trade-off occurs because we compress positional encoding IDs to mitigate the remote decay problem. While this compression significantly benefits tasks that don't require fine-grained image information, it inevitably reduces information richness for OCR tasks. For instance, in ChartQA evaluations with a maximum number of 12, our compression maps 3,328 image tokens to only 256 RoPE position IDs.
The computation of RoPE-2D (no-overlaps) involves the following two steps:
1. Subimage position IDs are mapped to their corresponding thumbnail patch positions.
(In the initial version of RoPE-2D, all subimage position IDs matched their corresponding thumbnail patch IDs. Here, however, they are sequentially incremented starting from the thumbnail patch IDs.)
2. Thumbnail position IDs accumulate based on the number of mapped patch tokens
As shown in [Table 2](https://drive.google.com/file/d/1lXmFL1Nl0YYsIu6FStLt7v7VjMFUm1f5/view?usp=sharing), this approach demonstrates improvements across all evaluation metrics compared to the baseline (LVLM-S vs CoMemo+RoPE2D(no-overlaps)). However, it shows significantly weaker performance on long-context and generation tasks compared to the compressed RoPE-2D version.
`Q7: Can CoMemo still outperforms -S variant if applying rope-2d to -S variant.`
Yes, RoPE-2D can indeed be applied to the -S variant.
As shown in Table 2, our 2D positional encoding reconstruction enhances LVLM-S's performance across multiple capabilities: multi-image processing, general VQA, and image captioning tasks. This approach also improves long-context understanding and extended text generation by mitigating the remote decay issue through compressed positional encoding. However, as previously noted, we observed some degradation in OCR performance. Due to time constraints, we were unable to conduct experiments with the RoPE-2D(no-overlaps) variant. Based on the consistent improvements demonstrated in Table 2, we anticipate this variant would similarly provide stable performance gains across various capabilities.
We appreciate the constructive comments from the reviewer regarding RoPE-2D. We believe that these experiments and analyses will help followers gain a more comprehensive understanding of RoPE-2D.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. I apologize that, due to time constraints, I did not cover all the details in the paper earlier. My concern has been resolved, and I will update my score to 3. I hope the authors can polish the writing before the camera-ready version for a better reading experience.
---
Reply to Comment 1.1.1:
Comment: Thanks for raising the scores! We have polished the areas that were unclear as mentioned in the reviewer comments. We will also add the additional experiments from the rebuttal stage to the next version and continue refining the text, which will help clarify the motivation and strengths of our method. | null | null | null | null | null | null |
Automatic Reward Shaping from Confounded Offline Data | Accept (poster) | Summary: The paper aims to automate reward shaping when learning policies online via Reinforcement Learning (RL). The authors propose an automated approach for designing reward functions utilizing previously collected offline samples with unobserved confounders. State value upper bounds are calculated and used as a conservative optimistic estimation of the optimal state value function. These estimates are then plugged into the Potential-Based Reward Shaping formulation. Empirical results on toy tasks demonstrate that the shaping function provides a better regret bound.
Claims And Evidence: Please refer to strengths and weaknesses.
Methods And Evaluation Criteria: Please refer to strengths and weaknesses.
Theoretical Claims: Please refer to strengths and weaknesses.
Experimental Designs Or Analyses: Please refer to strengths and weaknesses.
Supplementary Material: Yes, the appendix.
Relation To Broader Scientific Literature: Please refer to strengths and weaknesses.
Essential References Not Discussed: Please refer to strengths and weaknesses.
Other Strengths And Weaknesses: ### Strengths
* The paper is theoretically rich and detailed in its explanations.
* The paper studies an important problem in the online setting, often overlooked by the RL community.
### Weaknesses
* **Empirical Evaluation:** While the paper makes a theoretical contribution, the authors have provided empirical experiments to support the regret bound. My main concern is the empirical evaluation, carried out on gridworld tasks that present a small number of confounding states. State spaces for these tasks are countable, and thus it is non-trivial to study the effects of confounders in such a setting. Could the authors clarify whether they inject any noise into the data distribution? How much confounding does the agent encounter on average? It would be worthwhile to inspect the shaping function on varying grid sizes and under different settings of robustness, i.e., different start states or a larger, more challenging setting.
* **Comparison to Q-UCB:** I am struggling to understand the differences and contributions of the shaping scheme when compared to Q-UCB. Primarily, the authors introduce zero initialization of Q-values, a UCB bonus based on the potential function, and a shaped reward during the updates. Could the authors explain their reasoning behind these choices? Also, could a comparison between the causal shaping function and Q-UCB be provided? In its current form, it is difficult to understand the improvements of utilizing PBRS.
* **Overall Presentation and Writing:** The presentation and writing quality of the paper could be further improved. Various portions of Sections 2 and 3 could be made more intuitive. Theoretical explanations following theorems and corollaries could be made more concise and clear. Plots corresponding to experiments could be enlarged and reduced in file size, as the paper takes a long time to load. Grammatical errors could be reduced; a few are mentioned below.
Other Comments Or Suggestions: ### Minors
* line 85: levering $\rightarrow$ leveraging
* line 97: denote the their $\rightarrow$ denote their
* line 210: optimal optimal $\rightarrow$ optimal
* line 210: state function $\rightarrow$ state value function
* line 216: form $\rightarrow$ from
Questions For Authors: Please refer to strengths and weaknesses.
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback and appreciate your recognition of the significance of our work. We have addressed each of your concerns in the responses below.
> Weakness 1: “Could the authors provide if they inject any noise in the data distribution? How much confounding does the agent encounter on average? It would be worthwhile to inspect the shaping function on varying grid sizes and under different settings of robustness, i.e- different start states or a larger challenging setting.”
This paper investigates reward shaping using confounded observational data and demonstrates how the learned shaping function can accelerate future online learning processes. Like many online reinforcement learning (RL) algorithms, the performance guarantee of our proposed method relies on the assumption that the measured variables are discrete (as in MDPs). To our knowledge, the evaluation environments we used are comparable to those in existing studies on online RL and reward shaping in terms of state space and complexity.
Although the discrete state space may seem simple, reward shaping remains significantly challenging when unobserved confounding is present or cannot be assumed away. For instance, in our simulation shown in Figure 4(b), naively learning shaping functions from confounded observations, while ignoring the confounders, often fails to capture valuable information, resulting in inconsistent improvements in the performance of online RL algorithms. In contrast, our proposed causal approach achieves an order-of-magnitude improvement.
In our current experimental setup, uniform random starting positions are used to sample offline data from the environment. We acknowledge that in a large state space with limited starting locations, the offline data may not sufficiently explore the state space, resulting in optimistic estimates for those under-explored states. However, this optimistic estimation is the most reasonable guess at the optimal values of those states, as no other information or assumptions can rule out the possibility that these under-explored states may be highly rewarding.
> Weakness 2: “Could the authors explain their reasoning behind these choices? Also, could a comparison between the causal shaping function and Q-UCB be provided?”
Thank you for the opportunity to clarify this issue! We have conducted experiments on vanilla Q-UCB, which corresponds to the “No Shaping” baseline in Fig. 4. In the revised version, we have updated the experiment legend to use the term “vanilla Q-UCB.”
Regarding the design choices we made in Q-UCB Shaping:
1. **Zero Initialization of Q-values**: Zero initialization is used because when applying a shaping function that serves as an upper bound of the optimal state value, the optimal state values after shaping are shifted downward by the magnitude of the shaping function, which is now at most zero.
2. **Modification of the UCB Bonus**: We modify the UCB bonus to use potential functions because the standard UCB bonus applies an overly optimistic estimate (i.e., number of steps times the maximum reward per step). Since we now have a better upper bound, using it helps reduce overestimation.
3. **Shaped Reward During Updates**: Shaped rewards are used during updates to incorporate additional information extracted from offline data, thereby enhancing the online learning process.
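These three choices combine naturally in a single tabular update. The sketch below is only illustrative: it uses standard potential-based shaping (Ng et al. 1999), r' = r + gamma * phi(s') - phi(s), with a simplified count-based bonus capped by the potential; the constants, bonus form, and learning-rate schedule are not the paper's exact algorithm.

```python
import math
from collections import defaultdict

def shaped_q_update(Q, counts, s, a, r, s_next, actions, phi, gamma=0.9, c=1.0):
    """One illustrative Q-UCB-style update with potential-based shaping.

    Q is a defaultdict(float): zero initialization (choice 1). The bonus is
    capped by phi(s), the tighter upper bound (choice 2), and the reward is
    shaped a la Ng et al. 1999 (choice 3). Constants are simplified.
    """
    counts[(s, a)] += 1
    n = counts[(s, a)]
    r_shaped = r + gamma * phi(s_next) - phi(s)              # shaped reward
    bonus = min(c * math.sqrt(math.log(n + 1) / n), phi(s))  # capped optimism
    alpha = 1.0 / n
    target = r_shaped + bonus + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q[(s, a)]
```

With `phi` an upper bound on the optimal state value, the shaping term shifts the post-shaping optimal values down toward zero, which is why zero-initialized Q-values are the natural starting point.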
> Weakness 3: “Plots corresponding to experiments could be enlarged and reduced in file size as the paper takes a long time to load. Grammatical errors could be reduced.”
We appreciate the reviewer’s careful reading of our work. We have corrected the typos, reduced image file sizes, adjusted figure sizes, and further polished the language in the revised version. We will upload it once the portal opens.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response to my comments. My concerns have been addressed and I would like to keep my current score. I thank the authors for their efforts.
---
Reply to Comment 1.1.1:
Comment: Thanks for the quick updates! We would be glad to answer further questions if there are other major blockers to raising the score. As always, we highly appreciate your time and consideration!
Claims And Evidence: The paper claims that Q-UCB Shaping improves sample efficiency, enables convergence to the optimal interventional policy, and provides theoretical guarantees for its causal upper bounds. These claims are partially well-supported.
The causal Bellman bounds provide a solid theoretical foundation, making the claim about upper bounding optimal state values convincing. The empirical results show better performance compared to baselines in confounded environments, supporting claims of improved sample efficiency.
However, convergence to the optimal interventional policy is assumed rather than directly analyzed. Additional policy divergence or optimality gap analyses would strengthen this claim.
Methods And Evaluation Criteria: The use of CMDPs and causal bounds is well-motivated for the problem of learning from confounded offline datasets. The choice of MiniGrid windy environments is reasonable for demonstrating confounding effects.
However, the limited scale of the environments makes it unclear how the method generalizes to high-dimensional settings.
The paper does not compare against stronger offline RL baselines, such as BCQ, CQL, or IRM-based methods, which would provide a better context for the performance gains.
Theoretical Claims: The paper presents three key theoretical results: (1) the Causal Bellman Optimality Equation (Theorem 3.1), which derives an upper bound for the optimal value function in CMDPs, (2) the Unified Causal Optimal Upper Bound (Corollary 3.2), which extends the bound when multiple offline datasets are available, and (3) the Regret Bound for Q-UCB Shaping (Theorem 4.5), showing improved regret bounds. While these proofs appear correct, they have some limitations.
1. They assume a tight upper bound on state values, which may not always hold in practice.
2. They do not explicitly analyze the impact of latent variable shifts under partial observability.
Experimental Designs Or Analyses: The experiment settings clearly simulate confounding effects in offline reinforcement learning, and the comparison against multiple reward shaping strategies is insightful.
However, the environments are relatively small-scale, limiting their real-world applicability.
Additionally, the study would be more compelling if it included experiments with more complex confounding structures, such as hierarchical or multi-agent confounders.
Supplementary Material: I briefly reviewed the supplementary material but did not closely examine the theoretical proofs.
Relation To Broader Scientific Literature: The paper connects to:
1. Offline RL: The method is related to batch-constrained RL and robust RL, but does not compare against conservative Q-learning or implicit Q-learning.
2. Exploration via UCB: The paper applies UCB exploration in CMDPs, but prior works (e.g., bandit-based causal RL) should be discussed to highlight novelty.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths**
1. The writing is clear and well-structured.
2. The introduction of causal Bellman optimality bounds is a valuable contribution.
3. Combining UCB-based exploration with causal bounds is an interesting and underexplored direction.
**Weaknesses**
1. The approach is tested in small grid-worlds but not in larger RL settings.
2. The paper does not compare against state-of-the-art offline RL methods, such as CQL, BCQ or IQL.
Other Comments Or Suggestions: I acknowledge that there are aspects of this paper that I am not fully familiar with, and I may not be a fully qualified reviewer for this topic. However, I am very willing to discuss the work with the authors.
Questions For Authors: 1. Can you provide empirical evidence showing how close the learned policy is to the optimal interventional policy? Policy divergence metrics would help clarify this.
2. How well does the method scale to continuous or large state-action spaces? Have you considered testing in Mujoco or other RL benchmarks?
3. How does the performance change when the offline dataset is sparse or comes from suboptimal behavior policies?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for your detailed comments. We have addressed your concerns in the sequel.
> Optimality of the learned policy & Question 1
While we adopt the regret analysis framework, we acknowledge that it offers only a weaker guarantee of convergence to the optimal policy. A PAC framework may be more suitable for optimality analysis. For empirical evidence of the learned policy quality, we have visualizations in the experiment section (Fig. 5). Since the learned policy is deterministic, we further report the ratio of states where the learned policy is optimal, averaged over three seeds with values closer to 1 indicating better performance.
algo/env|Windy Empty World|LavaCross(easy)|LavaCross(hard)|LavaCross(maze)
-|-|-|-|-
Vanilla Q-UCB| .0 | .0 | .0 | .03
Q-UCB Shaping (Ours)|.76|**.70**|**.81**|**.90**
Shaping w/ Min. Val.|.72 |.49 | .30 | .24
Shaping w/ Max Val.|.73 |.52 | .59 | .33
Shaping w/ Avg. Val.|.74 |.50 | .49 | .24
BCQ| .79|.50|.67|.50
Shaping + BCQ| **.88**|.31|.22|.03
> Applicability to high-dimensional and continuous envs & Weakness 1 & Question 2
Thanks for the question! You can refer to the first part of our response to reviewer dtkk.
> Tight upper bounds
Our proposed method is data-driven, so the learned bound in practice may not be tight. Bound tightness can be improved when multiple datasets are available by taking the minimum over those. Theoretically, the tightness is determined by the natural bounds, without requiring further assumptions about the environment. To illustrate this, we provide a simpler example involving bandits to demonstrate how the bound on rewards can be tight. Our proposed bounds in Equations 5 and 6 are conditional extensions of this idea.
Example: Consider a confounded MAB where reward $Y$ depends on action $X$ and they are confounded by a variable $U$. All variables are binary. The natural bound indicates that the distribution of the reward under interventions can be bounded as $P(y|do(x))=P(x,y)+\sum_u P(y|x,u)(P(u)-P(u,x))\leq P(x,y)+\sum_u (P(u)-P(u,x)) = P(x,y)+1-P(x)$. Equality is achieved and the bound is tight when $P(y|x,u)=1$ for both $u=0$ and $u=1$, which is possible if the reward is deterministic given an action.
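The bandit example above can be checked numerically. The sketch below uses a hypothetical binary confounded bandit (all distributions are illustrative, not from the paper) and verifies both that the natural bound holds and that it is attained when the reward is deterministic given the action:

```python
# Numerical check of the natural bound P(y|do(x)) <= P(x, y) + 1 - P(x)
# for a binary confounded bandit. All distributions here are illustrative.
P_u = {0: 0.4, 1: 0.6}                       # P(U=u)
P_x_given_u = {(0, 0): 0.7, (1, 0): 0.3,     # P(X=x | U=u): behavior policy sees U
               (0, 1): 0.2, (1, 1): 0.8}
# Deterministic reward given the action (P(y=1|x,u) = 1 for all u) => tight bound.
P_y_given_xu = {(x, u): 1.0 for x in (0, 1) for u in (0, 1)}

def p_xy(x, y):
    # Observational joint: P(X=x, Y=y) = sum_u P(u) P(x|u) P(y|x,u)
    return sum(P_u[u] * P_x_given_u[(x, u)] *
               (P_y_given_xu[(x, u)] if y == 1 else 1 - P_y_given_xu[(x, u)])
               for u in (0, 1))

def p_x(x):
    return sum(P_u[u] * P_x_given_u[(x, u)] for u in (0, 1))

def p_y_do_x(x):
    # Interventional: P(Y=1 | do(X=x)) = sum_u P(u) P(y=1|x,u)
    return sum(P_u[u] * P_y_given_xu[(x, u)] for u in (0, 1))

for x in (0, 1):
    bound = p_xy(x, 1) + 1 - p_x(x)          # natural bound
    assert p_y_do_x(x) <= bound + 1e-12      # the bound holds
    assert abs(p_y_do_x(x) - bound) < 1e-12  # and is tight in this deterministic case
```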
> Latent variable distribution shifts
In this paper, we study the problem of reward shaping in confounded MDPs. To the best of our knowledge, this work is novel and extends reward shaping methods to address the practical challenges of confounders. We acknowledge that reward shaping in confounded POMDPs and under distribution shifts is a challenging and exciting research question. We appreciate the suggestion.
> More complex confounding biases structures
Since our work is built upon the framework of CMDPs, we do not impose additional constraints on how confounders may correlate with each other. As a result, our approach can handle hierarchical confounders when they fit within the causal diagrams of CMDPs. Exploring the multi-agent case is an interesting direction for future work, as this paper focuses on designing shaping functions for training single agents.
> Related works:
We have extended the discussion on both offline RL and causal bandits in the revised version. Briefly, many existing offline RL methods learn q-values without accounting for confounders. Some approaches involve learning initial values offline and then fine-tuning them online, but this offline initialization often diminishes during online learning. Our proposed method addresses confounders in the offline data and preserves the extracted knowledge during the online phase through reward shaping.
Regarding causal bandits, our approach shares the same motivation of using causal bounds to address confounding biases and reduce the regret. However, we extend this idea by combining bounds with UCB exploration in sequential domains and applying it to the optimal Bellman equation.
> Weakness 2: Compare with SOTA offline baselines
We have added experiments to the revised version comparing our method with BCQ, and we summarize the state ratio with optimal policies in the table provided in the first part of our response. We train BCQ using the same mixed confounded datasets as our method’s inputs. We evaluate the performance of the policy extracted directly from BCQ q-values, and the performance of Q-UCB using BCQ values as shaping functions. Our method outperforms both BCQ baselines in non-empty windy grid worlds, where confounder information is essential for completing the tasks.
> Question 3: performance change under imperfect offline data
In the revised version, we provide performance results when the good behavioral policy is excluded. To summarize, the final regrets remain nearly unchanged [[img link]](https://ibb.co/album/JBYSZm). This is because the bounds from optimal demonstrations are typically overly optimistic; when unified with bounds from imperfect demonstrations, they are not selected by the min operator and thus have only a minor effect on the shaping functions.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes. The proposed method demonstrates strong potential for supporting offline reinforcement learning with reward shaping, and the evaluation benchmark aligns well with the primary focused challenges.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: The experimental environment supports the focused challenge well. The proposed method outperforms baselines, and the provided visualization of the policy makes sense to me.
My only concern is how the proposed method would perform in more realistic environments, like MuJoCo.
Supplementary Material: Yes, section A- E.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strength**
- The paper is well written and easy to follow.
- The problem is interesting and important to me.
- The examples provided are very helpful for my understanding.
**Weakness**
However, I am somewhat concerned about the generality of the setting. As it is hard to say in the realistic task, how the confounders exist, like between state and action, or only action and reward. Could you please provide additional real-world examples to illustrate its broader applicability?
Furthermore, it would be beneficial if Figure 2 included the transition functions and a clearer demonstration of how the unobservable confounder operates, as this would enable a quicker and more thorough comprehension.
Lastly, I would like to confirm my understanding: the confounders can be viewed as hidden states of the environment. Consequently, not only must the behavior policy account for these confounders, but the learned policy should also incorporate them when making decisions. Is that right? Could you please also clarify the role of the confounders in decision-making and policy learning? Is it possible to learn the optimal policy without observing the confounders?
Other Comments Or Suggestions: In Line 210, it seems a copy-paste error here.
Questions For Authors: - Could you provide more real-world examples or case studies to illustrate the broader applicability of your proposed method, beyond the standard benchmark environments?
- In which scenarios are confounders critical for policy learning, and in which cases might they be less relevant or unnecessary? How should one identify and account for such confounders in practice? Can I regard it as one kind of Partitial Observation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your thoughtful feedback. We appreciate your acknowledgment of the significance of our work and have addressed your concerns in the sequel.
> “How would the proposed method perform in more realistic environments, like MuJoCo?”
In this paper, we primarily focus on learning shaping functions from observational data and using the learned functions to accelerate online learning processes. Like many online RL algorithms, we use UCB as our basic learning method. The theoretical guarantees of UCB’s performance require the state-action space to be discrete, which may not directly apply to continuous environments like MuJoCo. However, this is a common challenge for most online RL algorithms. Extending these to continuous domains is an exciting challenge, and we appreciate the suggestion.
That said, we would like to briefly explain how our proposed method can be extended to high-dimensional and continuous state/action domains. For each component in our causal bound – i.e., policy distribution, reward, transitions, and values – we can train neural networks to learn these from offline data. The primary challenge is that the max operator of the bound in high-dimensional and continuous spaces cannot be computed exactly, necessitating approximation. Specifically, we can train a separate neural network to select the next possible best state based on observational data and then extract values from the learned value networks. This is a natural extension of our contributions; however, we were unable to provide more formal results at this point. Thus, we leave these out of the current manuscript.
> “Could you provide more real-world examples or case studies to illustrate the broader applicability of your proposed method?”
Unobserved confounders naturally arise when decision-makers consider factors that a passive observer cannot access. For instance, when training autonomous vehicles, data is often collected from engineering prototypes equipped with costly laser radars. However, to reduce production costs, we may wish to train less expensive vehicles equipped only with cameras using this data. Here, radar information becomes an unobserved confounder for the learner. Similarly, in robotics, numerous factors influence action outcomes without being monitored, contributing to what is known as the “sim-to-real gap”, caused by unobserved confounding. For example, friction and instability in demonstrators’ joints may be unmonitored, affecting their behaviors. The method proposed in this work can be used to design denser reward signals when training robots or autonomous vehicles, using confounded data as such.
> “Furthermore, it would be beneficial if Figure 2 included the transition functions and a clearer demonstration of how the unobservable confounder operates, as this would enable a quicker and more thorough comprehension.”
In the robot walking example, the step size ($U_t$) required to stabilize its body ($F_t=0 \rightarrow F_{t+1}=1$) is an unobserved confounder. We will update the manuscript to clarify this issue. Thank you for the suggestion.
> “In which scenarios are confounders critical for policy learning, and in which cases might they be less relevant or unnecessary? How should one identify and account for such confounders in practice? Can I regard it as one kind of partial observation?”
Confounders can be taken as partial observations, except that behavioral policies may access some of them. Consequently, the learned policy should account for these – though, as we will explain next, there are cases where this may not be necessary. However, the unobserved confounders are not directly accessible to the learned policy and require special treatment, which is the objective of the causal bounds proposed in this work.
Regarding optimality, as demonstrated in our experiments, the learned policy proves to be a safer choice compared to one that aims to collect all coins and reach the goal location. This latter policy is the best possible outcome the agent could achieve without observing the wind. However, it is no longer optimal when compared to a behavioral agent with greater sensory capabilities. Generally, when unobserved confounders exist, it is understood in the literature (Pearl, 2009) that the optimal policy is under-determined by observational data (not identifiable) without additional experiments or assumptions.
Interestingly, the current models usually assume away the confounders, and not the other way around. In other words, our goal is not to learn the unobserved confounders, since they are unobserved, but to have methods that are robust and could protect against them (without having to measure them explicitly). We argue in this paper that it is hard to assume away confounders in many real-world domains and propose solutions to it. In a way, we are relaxing something baked in many methods in the literature.
> Typos
Thanks! We will fix it in the revised version. | null | null | null | null | null | null | null | null |
Universal Biological Sequence Reranking for Improved De Novo Peptide Sequencing | Accept (poster) | Summary: This paper presents RankNovo, a deep reranking framework for de novo peptide sequencing that integrates multiple sequencing models using a listwise reranking approach and axial attention to extract informative features. The introduction of PMD (Peptide Mass Deviation) and RMD (Residual Mass Deviation) provides fine-grained supervision, and extensive experiments show that RankNovo outperforms its base models while achieving state-of-the-art performance with strong zero-shot generalization. The paper is well-structured and presents a compelling methodology, but some areas should be improved.
Claims And Evidence: Yes. The manuscript presents a comprehensive set of experimental results, including comparisons with baseline models, ablation studies, and hyperparameter variations.
Methods And Evaluation Criteria: Yes, it does.
Theoretical Claims: Not a theory paper.
Experimental Designs Or Analyses: Table 1 presents the performance of **RankNovo** and baseline models on the **9-species-v1** and **9-species-v2** test sets.
Figure 3 illustrates the **zero-shot performance** of RankNovo.
Table 3 and Figure 4 show the results of the **ablation study**.
Supplementary Material: I have reviewed all of the supplementary material.
Relation To Broader Scientific Literature: The manuscript presents a clear and well-structured motivation, and introduces two new metrics, PMD and RMD, to characterize quality differences at the peptide and residue levels.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths :
1. The paper is written very clearly and is easy to understand.
2. It achieves SOTA performance on two datasets.
3. The presentation is good.
Weaknesses:
1. Statistical Significance Analysis: While RankNovo outperforms other baselines as shown in Table 1, the paper does not mention whether the improvements are statistically significant. Including a significance analysis (e.g., t-tests or confidence intervals) would strengthen the reliability of the reported results.
2. Lack of Hyperparameter Experiments: Although the appendix provides a hyperparameter table, key parameters such as λ (and potentially others) should be analyzed through dedicated experiments to assess their impact on model performance. A sensitivity study would help determine how robust RankNovo is to different hyperparameter choices.
3. Computational Complexity and Time Efficiency: While RankNovo appears to be a promising approach, the paper lacks a detailed analysis of its computational cost. Given its reliance on multiple base models during training and inference, a discussion on computational complexity and runtime is essential to provide a more comprehensive understanding of its feasibility in real-world applications.
4. Clarification of Zero-shot Experiment Motivation: The zero-shot experimental setup needs clearer justification. It is unclear how this experiment directly relates to the main challenge mentioned in Lines 80–82, which describes the difficulty arising from the heterogeneous nature of mass spectrometry data. Is the zero-shot setting intended to evaluate the model's generalization ability? If so, should it be framed as an OOD (Out-of-Distribution) test instead? A clearer connection between the experimental design and the stated challenge would improve the overall coherence of the study.
Other Comments Or Suggestions: None
Questions For Authors: Please refer to Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your detailed comments. We address your concerns as follows:
> A significance analysis is needed to strengthen the reliability of performance improvement.
Thank you for your suggestion. We agree that statistical significance analysis is essential for validating the reported improvements. In our main results, we presented only the mean statistical improvements without assessing their significance. ```To address this limitation, we conducted pairwise t-tests at the peptide level to evaluate metric improvements on the 9-species-v1 dataset.``` The t-values for the six base models are 82.4, 42.1, 38.6, 88.2, 43.2, and 50.7, respectively. For all models, the p-values are less than 1e-12. The results show that RankNovo's performance improvements over all base models are **statistically significant**:
- All comparisons yielded t-statistics of at least 38.6943 (based on total 90K samples)
- All p-values were below 1e-12, substantially lower than the conventional 0.05 threshold
These findings conclusively demonstrate that RankNovo achieves statistically significant improvements in both peptide recall and amino acid precision, providing stronger support for the conclusions presented in Section 4.2 (Main Results).
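The paired (peptide-level) t-test described above can be sketched as follows. The 0/1 "peptide correct" indicators below are synthetic placeholders, not the real predictions, and the effect size is illustrative only:

```python
# Sketch of a peptide-level paired t-test on per-peptide 0/1 correctness
# indicators. Data here is synthetic; the real test would use actual
# reranker vs. base-model outputs on the 9-species-v1 set.
import math
import random

random.seed(0)
n = 90_000  # total samples, as in the rebuttal
# Hypothetical indicators: the reranker recovers some peptides the base model misses.
base = [1 if random.random() < 0.62 else 0 for _ in range(n)]
rank = [1 if (b or random.random() < 0.10) else 0 for b in base]

def paired_t(a, b):
    # t = mean(d) / (sd(d) / sqrt(n)) on the per-peptide differences d.
    d = [x - y for x, y in zip(a, b)]
    mean = sum(d) / len(d)
    var = sum((x - mean) ** 2 for x in d) / (len(d) - 1)
    return mean / math.sqrt(var / len(d))

t = paired_t(rank, base)
print(f"t = {t:.1f}")  # a large positive t indicates a significant improvement
```

With n = 90K samples even small mean improvements yield very large t-statistics, which is consistent with the magnitudes reported above.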
> key parameters such as λ (and potentially others) should be analyzed through dedicated experiments to assess their impact on model performance.
Thank you for your valuable suggestion. While we provided λ evaluations in **Section 4.4** and **Appendix D.3**, we agree more analysis strengthens our work.
We conducted additional experiments testing different λ values:
|λ|0|0.25|0.5|0.75|1|
|----------|-------|-------|-------|-------|-------|
|pep recall|0.650|0.656|0.660|0.657|0.652|
Our findings show:
- λ **impacts performance**, with neither extreme values being optimal, validating our choice of λ=0.5 which balances PMD and RMD contributions.
- Even with sub-optimal λ values, RankNovo maintains peptide recall ≥0.650, still outperforming both ByNovo (0.623) and ContraNovo (0.618), demonstrating the framework's **inherent robustness**.
```While computational constraints prevented exhaustive hyperparameter analysis, our experiments across five λ values provide strong evidence for the effectiveness of our selected configuration and the overall robustness of RankNovo's approach.```
> a discussion on computational complexity and runtime is essential to provide a more comprehensive understanding of its feasibility in real-world applications.
Thank you for this important question.
**Appendix E.5** has already discussed RankNovo's inference speeds. As expected, RankNovo is slower than single-model approaches—a natural trade-off for reranking frameworks. The main bottleneck isn't reranking itself but gathering peptide candidates from base models, which require sequential autoregression and beam search, while RankNovo's inference needs just one attention forward pass.
RankNovo offers a flexible speed-performance trade-off addressing real-world concerns. Users can:
1. Use fewer base models/candidates when prioritizing speed
2. Scale up when maximum accuracy is critical
These findings are documented in **Appendix D.2 Table 12**. This adaptability is valuable in proteomics research and clinical settings where resource constraints and accuracy requirements vary significantly, enabling researchers to make informed decisions based on their specific constraints and objectives.
> The zero-shot experiment lacks clear justification. How does it relate to the challenge of heterogeneous mass spectrometry data (Lines 80-82)? Is it testing generalization ability? Should it be framed as an OOD test?
Thank you for this important question about our zero-shot experimental setup. Our zero-shot experiments evaluate RankNovo's generalization to unseen base models, which indeed constitutes an OOD test as you correctly suggested. We train RankNovo using outputs from only a subset of base models, then test its ability to rerank predictions from all models, including previously unseen ones. ```This capability is crucial for practical applications. As new sequencing models emerge, users can apply our released checkpoint directly without retraining, ensuring RankNovo's long-term utility.```
Regarding the heterogeneous MS data challenge, this is primarily addressed by **our reranking approach itself**. We deliberately employed six heterogeneous base models, each capable of performing well on different segments of the heterogeneous data (Fig. 1B). RankNovo functions as a meta-reranker that leverages collective knowledge while minimizing individual biases, directly addressing heterogeneous data distribution challenges, which is the advantage of ensemble learning proven in previous studies [1-3].
[1]End-to-end training of CNN ensembles for person re-identification
[2]Don't take the easy way out: Ensemble based methods for avoiding known dataset biases
[3]Exploring model learning heterogeneity for boosting ensemble robustness | Summary: The paper outlines a new reranking strategy, wherein the proposed meta-model is capable of ranking candidate peptide sequences for a given tandem spectrum. The proposed meta-model, RankNovo, innovates on existing approaches by (a) using axial attention to derive latent space representations, and (b) utilizing two novel metrics, PMD and RMD as supervision signals. The selected metrics enable a precise quantification of the predicted and true peptide sequences. Results on two benchmark datasets demonstrate the superiority of RankNovo over existing methods for de novo peptide sequencing from mass spectrometry data.
## Update after rebuttal:
I had assigned a score of 4 to the paper after my original review and the other reviews and author rebuttals have not changed my opinion, as I still believe that this work is a valuable contribution to this field.
Claims And Evidence: The paper primarily makes two major claims:
1. RankNovo surpasses the performance of existing approaches to de novo peptide sequencing. This is substantiated by extensive results in the main paper and the appendices.
2. RankNovo is able to generalize to unseen models without any additional training. The robustness of RankNovo to base models is is borne out by both the ablation study and the zero-shot performance benchmark.
Methods And Evaluation Criteria: RankNovo is benchmarked against 7 other models, some of which were also included as based models, on benchmark datasets. Amino acid precision and peptide recall are reported as metrics when comparing performance. The use of these metrics is justified in Section 4.1, with the peptide recall seemingly the more important metric for the task of de novo sequencing, but amino acid precision being used for completeness’ sake.
I found the evaluation to be fairly comprehensive and pertinent to the task at hand. But I have an issue with the choice of benchmark models. Specifically, the incorporation of base models as a benchmark appears to be “setting the stage” favorably for RankNovo; Casanovo V2, ContraNovo and ByNovo are included among the base models and consistently rank among the best performing benchmarks. Hence, it is not apparent if the superior performance of RankNovo over the other models is more attributable to the base models themselves being better, or the ability of the RankNovo framework to rerank peptides.
Theoretical Claims: There are no theoretical claims included in this work.
Experimental Designs Or Analyses: The experimental design appears to be sound with standard benchmark datasets used. The authors also report residue-level metrics in an effort to be consistent with existing literature. However, my qualm about the use of benchmark models as base models applies here too.
Supplementary Material: I have cursorily read the supplementary material but have not checked the details.
Relation To Broader Scientific Literature: The paper addresses a pertinent need in existing literature around de novo peptide sequencing. Scoring and ranking peptides based on the observed spectrum remains a difficult problem. Hence, a post-hoc reranking approach could be a great addition to the current toolbox for de novo peptide sequencing; rather than just another approach, RankNovo leverages the predictions of existing approaches on its way to delivering a more accurate peptide ranking.
Essential References Not Discussed: None that I can think of.
Other Strengths And Weaknesses: I found the ideas presented in this paper interesting and well-grounded. In particular, the use of RMD and PMD to provide supervision signals is logical but innovative and could potentially be utile to other works in the peptide-spectrum space. I also appreciate the thoroughness of the evaluation, with the inclusion of an ablation study and zero-shot performance.
On the other hand, the paper is a little difficult to follow in places. At several points, it was a struggle to make sense of what the authors meant, which detracted from the overall idea and innovations presented therein. I have pointed out specific instances in the next section; many of these comments pertain to dangling references or missing explanations of notations.
Other Comments Or Suggestions: 1. After Eq. 3, when forming the MSA embedding, it is not clear if the stacking is row-wise or column-wise.
2. The description of the backbone of RankNovo in Section 3.4 would be easier to understand if it were presented in the form of a diagram.
Questions For Authors: 1. In the last paragraph of the introduction, the authors state that RankNovo is “the first deep learning-based reranking framework”. I think it is worth reiterating or clarifying that it is the first DL-based reranking framework since some of the base models included already feature deep learning architectures.
2. Just after Eq. 1, it is stated that “Intensity signals are projected to d dimension with a linear layer because of its relatively lower accuracy”. What does “its” refer to here – the linear layer or intensity signal?
3. What does $\mathbf{r_i}$ refer to in Eq. 5?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed comments. We address your concerns as follows:
> Is the choice of base models making benchmark comparison favorably for RankNovo? Should performance improvement be attributed to base models' capabilities or the reranking model?
Thank you for this important question about the contribution of base models versus our reranking approach.
Our benchmark comparison is methodologically sound because: 1) We included all available baselines in our evaluation, **including the previous SOTA ContraNovo**. 2) Our methodology **follows established practices in ensemble research** [1][2][3], where comparing ensemble methods against constituent base models is standard.
To address whether RankNovo's performance stems from stronger base models or the reranking approach itself, our **ablation studies in Appendix D.2 and Table 9** are informative. When training and evaluating RankNovo using only five weaker models, our reranking approach still achieved 0.651 peptide recall - significantly **outperforming the excluded strongest ByNovo model** (0.623). ```This confirms that while high-performing base models contribute to results, the reranking methodology itself provides substantial independent value.```
[1] Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models
[2] Llm-blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion
[3] End-to-end Training of CNN Ensembles for Person Re-identification
> After Eq. 3, when forming the MSA embedding, it is not clear if the stacking is row-wise or column-wise.
Thank you for this important clarification question.
The stacking is performed **row-wise**. For a single peptide sequence embedding with dimensions (L, D)—where L represents the number of amino acids (after padding) and D represents the embedding dimension—the MSA embedding is constructed by row-wise stacking the embeddings of N peptide candidates to form a tensor of shape (N, L, D).
This MSA embedding construction **follows established practices** in the field, consistent with works such as MSA Transformer [1], AlphaFold2 [2], and AlphaFold3 [3].
[1] MSA Transformer
[2] Highly Accurate Protein Structure Prediction with AlphaFold
[3] Accurate Structure Prediction of Biomolecular Interactions with AlphaFold 3
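The row-wise stacking and the two axial views can be sketched with plain nested lists (shapes only; names are illustrative and not RankNovo's actual code):

```python
# Sketch of row-wise MSA-style stacking: N peptide candidates, each an
# (L, D) embedding, stacked along a new leading axis into (N, L, D).
N, L, D = 6, 30, 64   # candidates, padded peptide length, embedding dimension

# One (L, D) embedding per peptide candidate (zeros as placeholders).
candidates = [[[0.0] * D for _ in range(L)] for _ in range(N)]

# Row-wise stacking: candidate i becomes row i of the MSA tensor.
msa = list(candidates)                         # shape (N, L, D)
assert (len(msa), len(msa[0]), len(msa[0][0])) == (N, L, D)

# The "column" view used by axial attention groups the same residue
# position across all N candidates:
col_view = [list(col) for col in zip(*msa)]    # shape (L, N, D)
assert (len(col_view), len(col_view[0])) == (L, N)
```

Row attention then operates within each candidate (across L residues), and column attention across the N candidates at each residue position, mirroring the MSA Transformer / AlphaFold convention cited above.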
> The description of the backbone of RankNovo in Section 3.4 would be easier to understand if it were presented in the form of a diagram.
Thank you for this constructive suggestion. While we included a model backbone illustration in **Figure 2(B)**, we acknowledge it doesn't fully capture all technical details of RankNovo's architecture.
To address this, we've created a more detailed diagram focusing on **tensor shapes and attention mechanisms**, available at <https://anonymous.4open.science/r/RankNovo-F2FB/NewFig.png>.
> I think it is worth reiterating or clarifying that RankNovo is the first DL-based reranking framework since some of the base models included already feature deep learning architectures.
Thank you for this important point. You're right - some base models do incorporate deep learning architectures.
To clarify: RankNovo is novel as the first deep learning-based reranking framework specifically for de novo peptide sequencing. While existing models (including the base models) directly decode peptides through regression, RankNovo introduces a fundamentally different approach by evaluating multiple peptide candidates and selecting the optimal prediction.
This reranking mechanism represents a methodological shift, allowing RankNovo to function as a **universal enhancement module** that integrates with various de novo sequencing models regardless of their architecture.
> Just after Eq. 1, it is stated that “Intensity signals are projected to d dimension with a linear layer because of its relatively lower accuracy”. What does “its” refer to here – the linear layer or intensity signal?
Thank you for highlighting this ambiguity. "Its" refers to the **intensity signal**, not the linear layer.
Intensity signals in MS/MS spectra inherently have lower accuracy compared to m/z signals due to measurement errors, as documented in Chang C, et al. [1]. Given this limitation, we follow Casanovo and ContraNovo by embedding intensity signals using a simple linear transformation, which is less sensitive to tiny changes than the sinusoidal encoding used for m/z signals.
[1] Quantitative and In-Depth Survey of the Isotopic Abundance Distribution Errors in Shotgun Proteomics
Q6: What does $r_i$ refer to in Eq. 5?
A6: In Equation 5:
$$g = E_{i \neq j} [P(r_i, r_j)] = \frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} |M(r_i) - M(r_j)|$$
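A minimal numerical sketch of this definition, where $M(r)$ is a residue's monoisotopic mass (the four masses below are approximate standard residue masses used only for illustration, not the model's full vocabulary):

```python
import itertools

# Gap penalty g: mean absolute mass difference over all ordered pairs of
# distinct residues, matching the double sum with i != j in the equation.
masses = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276}
pairs = [(a, b) for a, b in itertools.product(masses, repeat=2) if a != b]
g = sum(abs(masses[a] - masses[b]) for a, b in pairs) / len(pairs)
```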
The term $r_i$ refers to **the i-th residue (amino acid) in our vocabulary**. This equation defines the gap penalty $g$ as the average mass difference between all possible pairs of non-identical amino acids.

---

Summary: This paper presents RankNovo, a deep reranking framework that enhances de novo peptide sequencing by leveraging the complementary strengths of multiple sequencing models.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: ### Dataset
- Although the model has been tested on nine-species v1 & v2, it is better to evaluate it on the latest benchmark datasets [1], which include more diverse data.
### Method
- Table 5 shows that applying col-wise attention results in only a minor improvement (0.007 in Avg. Peptide Recall) compared to models without it while increasing computational complexity.
- Tables 17 and 18 show that RankNovo's inference speed is approximately 6–8 times slower than that of the base model.
- The innovation of the method is limited. It introduces reranking to de novo peptide sequencing. However, this task requires accurate inference rather than simply ranking peptides. **If none of the peptide sequences output by the base models are exact matches, the final output of the model will also be an incorrect sequence, thus compromising the flexibility of de novo peptide sequencing.**
[1] NovoBench: Benchmarking Deep Learning-based De Novo Peptide Sequencing Methods in Proteomics, NeurIPS 2024.
Theoretical Claims: This paper does not have a theoretical part, so there is no need to check it.
Experimental Designs Or Analyses: - The comparative methods in this paper should include more baseline for a more comprehensive comparison[1][2][3][4].
[1] De novo peptide sequencing with InstaNovo: Accurate, database-free peptide identification for large scale proteomics experiments.
[2] AdaNovo: Adaptive De Novo Peptide Sequencing with Conditional Mutual Information.
[3] π-PrimeNovo: An Accurate and Efficient Non-Autoregressive Deep Learning Model for De Novo Peptide Sequencing.
[4] π-HelixNovo for practical large-scale de novo peptide sequencing.
Supplementary Material: I have checked the supplementary materials in the appendix.
Relation To Broader Scientific Literature: - This paper introduces the reranking technique from NLP to the field of de novo peptide sequencing.
Essential References Not Discussed: - The key contribution of this paper is ranking the output of de novo peptide sequencing models conditioned on the spectral information. However, the final results can only be selected from the candidate peptides, which limits flexibility. SearchNovo [1] leverages spectral information for database searching and uses the retrieved candidate peptides to enhance de novo peptide sequencing, making it more flexible in comparison. This aspect is not discussed in the paper.
[1] Bridging the Gap between Database Search and De Novo Peptide Sequencing with SearchNovo, bioRxiv 2024.
Other Strengths And Weaknesses: - The paper is written clearly.
Other Comments Or Suggestions: ### Typo
- In Table 15, the Avg. Peptide Recall values are 0.653. However, in Line 1010, it is written as 0.657.
Questions For Authors: - Fix typos.
- Evaluate on the latest benchmark datasets.
- Add more baseline models for comparison.
- Explain the advantages of the ranking technique in de novo peptide sequencing compared to the latest database-enhanced method, SearchNovo.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed comments. We address your concerns as follows:
> It's better to provide benchmark performance of RankNovo on NovoBench.
Thank you for your suggestion. Following your recommendation, we expanded our evaluation to include all three **NovoBench** data sources: **Seven-Species (from DeepNovo), Nine-Species-V1, and HC-PT (from InstaNovo).** We construct Seven-Species and HC-PT as instructed in NovoBench.
| Model | Nine-Species (yeast) | Seven-Species (yeast) | HC-PT |
|-------|---------------------|----------------------|-------|
| Casanovo | 0.48 | 0.12 | 0.21 |
| Instanovo | 0.53 | - | 0.57 |
| AdaNovo | 0.50 | 0.17 | 0.21 |
| HelixNovo | 0.52 | 0.23 | 0.21 |
| SearchNovo | 0.55 | 0.26 | 0.45 |
| ByNovo (best base model) | 0.68 | 0.03 | 0.82 |
| RankNovo | 0.70 | 0.04 | 0.89 |
The results on NovoBench show that RankNovo is consistently superior on the Nine-Species and HC-PT datasets. On the other hand, models trained on MassiveKB (ByNovo and RankNovo) do not perform well zero-shot on Seven-Species. **Further analysis shows that the reason lies in the distribution misalignment between MassiveKB and Seven-Species.** The former is collected with high-resolution MS equipment, while the latter is composed of low-resolution data, which explains the phenomenon.
We greatly appreciate your valuable suggestion and will incorporate these additional benchmark results and the discussion about data composition into our camera-ready manuscript.
> More baselines should be included for a more comprehensive comparison.
Thank you for this valuable suggestion. We have expanded our evaluation to include **InstaNovo, AdaNovo, π-PrimeNovo, and π-HelixNovo**. Results show **RankNovo outperforms all these baselines across all metrics on Nine-Species-V1**, a brief summary is provided below.
| Model | Avg. Peptide Recall |
|-------|---------------------|
| Casanovo | 0.481 |
| InstaNovo | 0.532 |
| AdaNovo | 0.505 |
| PrimeNovo | 0.638 |
| HelixNovo | 0.517 |
| RankNovo | 0.660 |
Additionally, as mentioned in our response to Q1, **we've included comprehensive comparisons with InstaNovo, AdaNovo, and π-HelixNovo on NovoBench.**
**These expanded baselines and comparisons will be included in the camera-ready version.**
> It's recommended to discuss works combining database search and de novo sequencing, such as SearchNovo.
Thank you for this valuable recommendation. We have expanded **Section 2.1** to include **SearchNovo**, a concurrent development that combines database search with de novo sequencing. SearchNovo employs a retrieval mechanism to identify similar peptide-spectrum matches and integrates prefix peptide sequence information through a fusion layer.
This addition provides information on the broader research landscape. **We will include this expanded discussion in the camera-ready version.**
> RankNovo requires the target peptide predicted by one of the base models and this may harm flexibility. The advantages of reranking methods compared to SearchNovo should be explained.
Thank you for this important question about flexibility limitations and advantages.
We view SearchNovo and RankNovo as complementary approaches with distinct advantages:
- **Inference-time vs. Training-time Improvement**
While SearchNovo enhances itself through novel architectures and training techniques, RankNovo explores extracting additional value from existing model outputs at inference time. As models approach optimal performance on available training data, **the collective information in prediction data represents an untapped resource that RankNovo specifically leverages.**
- **Training-free Scaling and Adaptation**
SearchNovo primarily uses information from the single most similar PSM, limiting its capabilities post-training. In contrast, RankNovo's axial-attention modules integrate features from multiple base models and demonstrate training-free performance scaling with more candidates **(Section 4.3, Fig 3(A), Table 12)**. This allows practitioners to balance efficiency and performance as needed.
- **Controlled Prediction Space with Enhanced Reliability**
While RankNovo requires at least one base model to correctly sequence the peptide, **this constraint brings practical advantages**. By limiting modifications to existing predictions rather than generating novel sequences, **RankNovo minimizes the risk of introducing new errors**, **providing heightened reliability and interpretability** for real-world applications. Despite this constrained prediction space, **RankNovo achieves state-of-the-art performance across all benchmarks.**
In conclusion, these approaches address different aspects of sequencing challenge and can be integrated - SearchNovo could serve as a base model within RankNovo's framework, potentially combining respective advantages.
> Fix typos.
Sorry for the confusion; the typos will be fixed in the final manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reviewer's feedback. I have improved my score. | Summary: This paper introduces RankNovo, an innovative deep reranking framework designed to enhance de novo peptide sequencing accuracy. RankNovo leverages the complementary strengths of multiple sequencing models to overcome the limitations of individual approaches.
RankNovo represents the first deep learning-based reranking framework for de novo peptide sequencing, introducing several innovations: it models candidates as multiple sequence alignments with axial attention to extract cross-candidate features while reducing computational complexity; develops mass-focused metrics (PMD and RMD) that provide precise supervision by quantifying differences at both sequence and residue levels; achieves state-of-the-art performance on benchmark datasets, surpassing its strongest base model by 6.1% and the previous best method by 4.3% in peptide recall; demonstrates robust zero-shot generalization to unseen models; and significantly improves discrimination between amino acids with similar masses, addressing a key challenge in mass spectrometry-based peptide identification.
The dual-track architecture combines spectrum feature extraction via a Transformer encoder with peptide candidate processing through axial attention, joined by a cross-attention mechanism. By jointly optimizing PMD and RMD objectives, RankNovo offers a flexible trade-off between inference time and performance, advancing the frontier of accurate de novo peptide sequencing.
Claims And Evidence: The majority of RankNovo's claims are well-supported by evidence, with comprehensive experiments and analyses. The paper convincingly demonstrates state-of-the-art performance through extensive benchmarking on standard datasets, with Tables 1-2 showing clear improvements over previous methods; the effectiveness of the novel PMD and RMD metrics is verified through ablation studies in Table 3; the complementary contributions of base models are illustrated through detailed analysis in Fig. 3(B) and Fig. 7; and improved discrimination between similar-mass amino acids is supported by comprehensive data in Fig. 3(D) and Fig. 5.
However, regarding zero-shot generalization, while Fig. 3(A) demonstrates RankNovo's ability to rerank predictions from unseen models, the paper lacks validation on more challenging datasets with diverse post-translational modifications (PTMs). Testing on specialized PTM-rich datasets, or synthetic phosphopeptide collections would provide stronger evidence of true generalization capability across the complex PTM landscape that characterizes real-world proteomics applications. This additional validation would strengthen the claim that RankNovo can generalize to difficult cases beyond the standard benchmarks.
Methods And Evaluation Criteria: RankNovo's methods and evaluation approach are aligned with the de novo peptide sequencing problem. The list-wise reranking approach appropriately handles multiple candidate sequences with subtle differences, while the MSA representation with axial attention efficiently captures both within-peptide patterns and cross-candidate information. The mass-centric metrics (PMD and RMD) are specifically tailored to the chemistry-driven nature of peptide sequencing, and the cross-attention mechanism effectively integrates spectrum information. The evaluation methodology is thorough, using established benchmark datasets (MassIVE-KB, 9-species-V1, 9-species-V2) with standard metrics like peptide recall and amino acid precision, while incorporating diverse base models and comprehensive ablation studies to assess different components' contributions.
**Minor Limitations**
Limited PTM Coverage: While the datasets include some post-translational modifications, more extensive testing on heavily modified peptides would strengthen the evaluation.
Single Instrument Type: Testing on data from a broader range of mass spectrometry instruments would better demonstrate cross-platform generalization.
Theoretical Claims: The RankNovo paper primarily focuses on empirical contributions rather than making significant theoretical claims that require formal proofs. There are no mathematical theorems, lemmas, or propositions in the paper that would necessitate rigorous proof verification. The main algorithmic contributions—the list-wise reranking approach with axial attention and the definition of PMD and RMD metrics—are presented as algorithmic descriptions with pseudocode rather than theoretical results.
The paper does briefly discuss computational efficiency claims regarding the reduction from O(n²) to O(n) complexity with axial attention, but this is a well-established result in the literature. Similarly, while the authors provide theoretical motivation for their design choices, these are presented as design rationales rather than formal claims requiring proof. In summary, the paper's contributions are predominantly empirical, focusing on the practical design, implementation, and evaluation of a novel framework for peptide sequencing.
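As a back-of-the-envelope illustration of the complexity reduction mentioned above (grid sizes are hypothetical): for an N x L grid of (candidate, residue) tokens, full attention scores every pair of tokens, while axial attention only scores pairs within each row and each column.

```python
# Pairwise-score counts for full vs. axial attention on an N x L token grid.
N, L = 16, 32                       # hypothetical: 16 candidates, 32 residues
full = (N * L) ** 2                 # every token attends to every token
axial = N * L ** 2 + L * N ** 2     # row-wise pass + column-wise pass
print(full, axial)  # 262144 24576
```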
Experimental Designs Or Analyses: Limited PTM Coverage: While the datasets include some post-translational modifications, more extensive testing on heavily modified peptides would strengthen the evaluation.
Supplementary Material: No Supplementary Material
Relation To Broader Scientific Literature: RankNovo integrates concepts from both proteomics and machine learning domains, building upon established research trajectories. In proteomics, it extends the evolution from traditional database search methods (SEQUEST, Mascot) to deep learning approaches for de novo peptide sequencing, following Casanovo's transformer architecture and DeepNovo's neural network foundation. However, RankNovo distinctly reframes peptide sequencing as a reranking problem rather than a generation task, adapting a strategy that has proven effective in protein structure prediction and other computational biology applications.
From a methodological perspective, RankNovo adapts several established machine learning techniques to the peptide sequencing domain. Its list-wise reranking approach parallels methods in information retrieval and NLP, while the axial attention mechanism borrows from efficient transformer variants in computer vision. The multiple sequence alignment representation repurposes traditional bioinformatics techniques, and the mass-focused metrics introduce domain-specific supervision signals. This combination addresses specific challenges in de novo peptide sequencing, particularly the discrimination between amino acids with similar masses. The paper's demonstration of zero-shot generalization connects to research on model-agnostic meta-learning, though the broader applicability of these techniques beyond peptide sequencing remains to be fully explored.
Essential References Not Discussed: The RankNovo paper, while generally thorough in its literature review, overlooks several important research directions that would better contextualize its contributions. In proteomics, the paper doesn't acknowledge ensemble approaches like "PepExplorer" (Leprevost et al., 2014) and "iProphet" (Shteynberg et al., 2011), which pioneered the combination of multiple search engines for peptide identification. Similarly, when discussing PTM analysis capabilities, the paper would benefit from referencing specialized frameworks like "Open-pFind" (Chi et al., 2018) and "pNovo+" (Chi et al., 2013) that specifically address the challenges of identifying post-translational modifications.
From a methodological standpoint, the paper misses opportunities to connect with fundamental work on efficient attention mechanisms such as "Linformer" (Wang et al., 2020) and "Performer" (Choromanski et al., 2020), which pioneered approaches to reduce attention complexity from O(n²) to O(n). Additionally, when presenting its list-wise ranking approach, the paper fails to acknowledge seminal works in information retrieval like "ListNet" (Cao et al., 2007) and "ListMLE" (Xia et al., 2008) that established the mathematical foundations for list-wise ranking. These omissions don't invalidate RankNovo's contributions but limit readers' ability to understand how it builds upon established techniques across multiple disciplines.
Other Strengths And Weaknesses: refer to previous section
Other Comments Or Suggestions: Refer to the previous section
Questions For Authors: Refer to previous section
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed comments. We address your concerns as follows:
> The paper lacks validation on more challenging datasets with diverse post-translational modifications (PTMs).
We sincerely appreciate your insightful observation regarding the need for validating RankNovo on datasets with more diverse post-translational modifications. You've highlighted a crucial aspect of peptide de novo sequencing that directly impacts the practical utility of our approach in comprehensive proteomics studies.
In our current Nine-Species-V1 benchmark, we included three classes of PTMs: Oxidation-M, Deamidation-N, and Deamidation-Q. The promising results on these modifications demonstrated that RankNovo can effectively enhance performance on PTM-containing spectra when such modifications are incorporated during training.
To address your valuable suggestion, we conducted additional experiments on a more diverse set of biologically significant PTMs from the dataset compiled by Zolg et al. [1]. Specifically, we selected three functionally important modifications: Acetylation (K), Dimethylation (K), and Phosphorylation (Y). Each PTM included ~62.5K spectra split 8:1:1 for training/validation/testing.
Given that these new PTMs were not in the original vocabulary of our models, we performed necessary fine-tuning procedures. We combined the training and validation datasets across all three PTMs, reinitialized the embedding and final linear layers to accommodate the expanded vocabulary, and fine-tuned both the six base models and RankNovo accordingly.
**Our results consistently demonstrated RankNovo's superiority:**
- Acetylation (K): 5.6% improvement over the best base model (0.889 vs 0.833)
- Dimethylation (K): 2.8% improvement (0.487 vs 0.457)
- Phosphorylation (Y): 6.7% improvement (0.589 vs 0.522)
| PTM | Casanovo | ContraNovo | ByNovo | R-Casanovo | R-ContraNovo | R-ByNovo | RankNovo |
|---|---|---|---|---|---|---|---|
| Acetylation (K) | 0.819 | 0.820 | 0.830 | 0.833 | 0.806 | 0.832 | 0.889 |
| Dimethylation (K) | 0.455 | 0.459 | 0.458 | 0.458 | 0.401 | 0.457 | 0.487 |
| Phosphorylation (Y) | 0.476 | 0.473 | 0.520 | 0.491 | 0.519 | 0.522 | 0.589 |
These compelling results further validate that our deep learning reranking framework maintains its potential across a more diverse spectrum of PTMs, underscoring the robustness and broader applicability of our approach for advanced proteomics research.
[1] ProteomeTools: Systematic characterization of 21 post-translational protein modifications by liquid chromatography tandem mass spectrometry (LC-MS/MS) using synthetic peptides
> Testing on data from a broader range of mass spectrometry instruments would better demonstrate cross-platform generalization.
Thank you for highlighting this limitation. While our original benchmarks (Nine-species-v1/v2) used only **Q Exactive instruments**, our training dataset (MassiveKB) **includes PSMs from various instruments, potentially enabling cross-platform generalization.**
To address your concern, we note that the **PTM datasets** mentioned in our response to Q1 were collected using **Orbitrap Fusion Lumos** instruments, different from our benchmark's Q Exactive instruments. RankNovo consistently outperformed all base models on these Orbitrap Fusion Lumos datasets across all three PTMs (Acetylation-K, Dimethylation-K, and Phosphorylation (Y)).
These results provide strong evidence of RankNovo's effectiveness beyond Q Exactive instruments, demonstrating its cross-platform generalization capabilities. We appreciate this suggestion which helped strengthen our evaluation.
> The manuscript would benefit from addressing additional research directions in the literature review to better contextualize scientific contributions.
Thank you for this valuable suggestion. We decided to incorporate the recommended research directions into our final manuscript as follows:
- **Introduction (Section 1):**
Added ensemble approaches for multiple search engines **(PepExplorer, iProphet)**
- **Related Work - De Novo Peptide Sequencing (Section 2.1):**
Incorporated PTM identification works **(Open-pFind, pNovo+)**
- **Related Work - Candidate Reranking (Section 2.2):**
Expanded with list-wise reranking algorithms **(ListNet, ListMLE)**
- **Related Work - Axial Attention (Section 2.3):**
Enhanced with efficient attention mechanisms **(Linformer, Performer)**
These additions better contextualize our contributions within existing literature and help readers understand how our work bridges deep learning techniques and proteomics applications. We appreciate your feedback which has improved the manuscript's depth and quality. | null | null | null | null | null | null |
---

Towards a Mechanistic Explanation of Diffusion Model Generalization | Accept (spotlight poster)

Summary: In this paper, the authors analyze inductive biases of diffusion models and relate these to the generalization capabilities of diffusion models. The authors start by examining the network denoiser approximation error, defined as the deviation of the prediction from the optimal denoiser, where the optimal denoiser equals the weighted average over the images of the dataset. The authors analyze how this difference behaves for different architectures and find similar behavior across them. They proceed to show that the generalization of these models arises as a product of locally biased operations, demonstrating this by approximating the operations using patch-based empirical denoisers. Finally, the authors propose PSPC, a training-free denoiser, and show that PSPC and the other denoisers exhibit greater similarity to each other than to the optimal denoiser, and that the samples produced with PSPC exhibit larger structural similarities.
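The empirical optimal denoiser described above (a weighted average over the training images) can be sketched as follows; this is a minimal illustration assuming the variance-exploding parameterization z = x + t * noise, not the paper's exact implementation:

```python
import numpy as np

# Empirical optimal denoiser: weighted average of dataset images, with
# weights proportional to exp(-||z - x_i||^2 / (2 t^2)).
def optimal_denoiser(z, data, t):
    flat = data.reshape(len(data), -1)
    d2 = ((flat - z.ravel()) ** 2).sum(axis=1)
    logw = -d2 / (2 * t ** 2)
    w = np.exp(logw - logw.max())   # subtract max for numerical stability
    w /= w.sum()
    return (w[:, None] * flat).sum(axis=0).reshape(z.shape)
```

At small noise levels the weights concentrate on the nearest training image, which is why the empirical optimal denoiser memorizes rather than generalizes.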
Claims And Evidence: I have several significant concerns regarding the paper's claims and methodology. Firstly, I found that many of the authors' assertions are unsupported by evidence or lack clear definitions. One of the primary motivators of the work is the concept of "local bias," but the authors fail to define or reference this term. This omission makes it challenging to understand the paper's main arguments and results.
In the following section, the authors do not really provide sufficient motivation and the "local bias" is not properly introduced or referenced. Furthermore, as the main claim of the paper is that this inductive bias is similar across various architectures and noise schedulers, it is disappointing that the authors have performed the remaining experiments only using DDPM++. I did not find any analyses corresponding to other architectures in the supplementary material either.
The biggest concern I have is that many of the effects which the authors analyze (e.g., gradient heatmaps and local behaviour) might be coming from the fact that all four chosen architectures use attention mechanisms. In a related work (which the authors correctly point out) by Kadkhodaie et al. (2023), it is suggested that generalization occurs as the inductive biases of networks interplay well with geometrically adaptive harmonic bases.
Methods And Evaluation Criteria: I believe that the problem of understanding generalization in diffusion models requires either more rigorous approaches (from a theoretical perspective) that can be backed up by numerical experiments, or in the case of pure numerical experiments (such as in this paper), much larger and stronger set of evaluations.
For example, the methodological issues in Figure 2 raise concerns: the authors display the MSE vs . Optimal Denoiser only for time up to t=30, but it is visible that for both U-ViT and NCSN++ the differences start to increase after t=30. Additionally, the averaging of 10,000 samples might suppress differences between the networks. To address this, the authors could focus on averaging paths that are semantically similar, such as averaging only 100 paths that generate an image from a same class, where the class has been obtained using a pre-trained classifier. This would help reduce the "averaging effect" and provide a more accurate representation of the similarities of the models' behavior.
Furthermore, the following section lacks sufficient motivation for the "local bias" concept, and the authors do not provide adequate analysis or references to support their claims. Moreover, the main claim of the paper – that the inductive bias is similar across various architectures and noise schedulers – is not adequately supported by experiments, as all experiments past Figure 2. only use DDPM++.
Also, the effects analyzed in the paper (e.g., gradient heatmaps and local behavior) might be attributed to the fact that all four chosen architectures use attention mechanisms. A related work by Kadkhodaie et al. (2023), which the authors correctly cite, suggests that generalization occurs due to the interplay between inductive biases and geometrically adaptive harmonic bases. In my opinion, this paper provides a stronger and cleaner argument towards the inductive biases. But, more importantly, that work performs its analysis using a UNet architecture, as well as BF-CNN, a version of the DnCNN network, neither of which uses attention. In my opinion, it would be crucial to include either of these architectures (or both), in order to be able to argue that the methodology in Sections 4 and 5 can actually be attributed to generalization and not just attention.
Theoretical Claims: There were no proofs or theoretical results in this paper.
Experimental Designs Or Analyses: Please see above.
Supplementary Material: I went through the supplementary material.
Relation To Broader Scientific Literature: Although the phenomenon of diffusion models' ability to generalize is of great interest, I find the contribution of this paper lacking, as well as the validity of its claims.
Essential References Not Discussed: I did not find any references to be missing, although I believe that the work by Kadkhodaie et al. (2023) should've been referenced much earlier as they provide an important and significant contribution to explanation of the generalization phenomena in diffusion models.
Other Strengths And Weaknesses: Please see above.
Other Comments Or Suggestions: 1. It is unclear why the authors included the sentence "Corroborating findings of Niedoba et al…": what are the findings that the authors are referring to here and why is this paper relevant to the problem at hand? The authors should point this out when citing it.
2. As mentioned previously, in Section 3.1, the authors analyze "local bias" without defining it or giving a reference to the definition. I am not aware of what do they mean by this, and this really dampens the strength of the motivation for the section, as well as the subsequent ones.
3. I think that the authors should include what does "near SOTA" mean, and provide some performance of their trained models (e.g., please provide FID) to strengthen the validity of their experiments.
4. What do the authors mean in line 152 when they write “of the network denoiser output at pixel (x,y)”? What is pixel "(x,y)"?
5. a) In equation (8), shouldn’t the first argument of the network D_theta be z and not x?
5. b) In same equation, what is the notation x,y,c (in the subscript) supposed to represent? Also, why is there y in equation (8), as from their introduction of the network they use is unconditional and does not take into account information y?
6. Finally, what do the authors mean by "drawing 10000 z samples from the forward process" (around line 150)? Do they mean sampling 10000 z samples and running forward process of 150 steps each time?
Questions For Authors: Please see above.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: ## General Comment
We'd like to thank the reviewer for their thorough review. We have summarized and responded to the key points of your review below in the limited available space. If we have missed any of your points, we welcome further discussion.
## Methods & Evaluation
**Network errors for $t>30$**
The reviewer is right. U-ViT & NCSN++ MSE is higher for $t > 30$. As U-ViT predicts $\epsilon$, we compute $x$ as $x = z - t \epsilon_\theta(z, t)$. This scales $\epsilon_\theta$ errors by $t$ (see L162). NCSN++ errors fluctuate near $t=40$, but are ~4x smaller than peak errors at $t=3$. We believe this is a training artifact. Further, Fig. 9 implies high-$t$ errors have little impact on generalization. Both U-ViT & NCSN++ have SSCD similarities > 0.7 with every model, indicating near-copy samples.
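The error scaling mentioned above can be sketched as follows (a minimal illustration of the conversion from an epsilon-prediction to an x-prediction, not the exact training code):

```python
import numpy as np

# Recover the x-prediction from an epsilon-prediction via x = z - t * eps.
# Any error delta in eps shifts the x-prediction by t * delta, so epsilon
# errors are amplified at large noise levels t.
def x_from_eps(z, eps_pred, t):
    return z - t * eps_pred
```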
**"Averaging effect"**
Beyond MSE, Fig. 1, 2, & 13-20 provide evidence that network outputs are qualitatively similar for the same input. To quantify this output similarity, we have measured output cosine similarity at $t=3.2$ for shared forward process inputs [[Link](https://drive.google.com/file/d/1hk6b-m7meFDw6qjw70_579REIpevgLIr)].
**Lack of other networks baselines past Fig. 2**
This assertion is incorrect: beyond Fig. 2, Figs. 9 and 13-20 all compare various models. However, for transparency and completeness, we will include the following modified versions of Fig 3, 4, and 6 in the appendix:
- [Fig 3](https://drive.google.com/file/d/1U1x3id-8ECHJugKNPVMouBRyHho4tpsB/) All models have a similar trend between gradient concentration and $t$.
- [Fig. 4](https://drive.google.com/file/d/1qLbefgMe_FI_dGph9LRPLSbZLcXGamHn/) Patch posterior MSE is similar for all models, except high-$t$ U-ViT.
- [Fig. 6](https://drive.google.com/drive/folders/1_296T4B96x_r7xAzczftO5YB9PlLZ0F8). We compare our methods against each network baseline. For UViT & DiT, we only have CIFAR checkpoints.
We are happy to provide additional figures upon request.
**On the impact of attention**
We do not believe attention layers are the cause of our observations. To test this, we trained an attention-free DDPM++ model on CIFAR. We find the MSE of PSPC is mostly unaffected [[Link](https://drive.google.com/file/d/1H6UrOp4looPt9m8FAVtcuvn4YlGW0Tu9)]. Qualitatively, the attention-free model's outputs are similar to other models [[Link](https://drive.google.com/file/d/1fnjmlGHC77uiXUYaJkOJI2cMkkDmSyO1)]. Similar to our work, Kamb & Ganguli find their patch-based denoiser performs **better** against attention-free models.
**On the inclusion of U-Net and BFCNN**
This paper's aim was to investigate generalization in modern image diffusion models, which overwhelmingly utilize attention layers. In addition to the lack of attention, the networks of Kadkhodaie et al. (2023) have unique properties such as no bias terms, and no $t$ conditioning. Due to these architectural quirks, we believe that analyzing commonly used U-Net and DiT architectures is of greater value to the community than BFCNN.
## Other Comments
**Q1**
Niedoba et al. (Fig. 3) find that network denoisers are biased for $t \in [0.3, 10]$, which matches our Fig. 2 findings. For clarity, we suggest the change:
>Across all architectures, we observe similar behaviour to Niedoba et al. (2024), Figure 3 - network denoisers exhibit low MSE for both small and large values of $t$, but substantial error for $t \in [0.3, 10]$.
**Q2**
We use "local inductive bias" to describe a preference for denoising functions where outputs at a spatial position (x,y) are more influenced by nearby input pixels than distant ones. Goyal & Bengio (2022) define inductive bias as a tendency for learning algorithms to favour solutions with certain properties. Kamb & Ganguli (2024) provide a more formal definition of the bias, calling it "locality." However, since Sec 3.1 mainly motivates our hypothesis on generalization via local denoising, we felt a formal definition was unnecessary. We propose the following changes:
- Replacing “local bias” with “local inductive bias” throughout
- Rephrasing the intro of Sec. 3.1 (L142) as: “One potential inductive bias (Goyal & Bengio 2022) of network denoisers is local inductive bias, where denoiser outputs are more sensitive to spatially local perturbations of $z$ than distant ones”
**Q3**
The networks analyzed have high performance but are not SoTA. Their FID scores are:
|Model|CIFAR|FFHQ|AFHQ|
|-|-|-|-|
|DDPM|1.97| 2.39|1.96|
|NCSN|1.98| 2.53|2.16|
|DiT|9.08|||
|UViT|3.11|||
**Q4**
Pixel (x,y) is the output pixel at spatial location x,y where $x \in \\{1, ..., w\\}, y \in \\{1, ..., h\\}$ for image height $h$ & width $w$. We'll update this notation for clarity.
**Q5**
We'll fix this error.
**Q6**
The subscript x,y,c denotes indexing at position (x,y) and channel $c \in \{1, 2, 3\}$. We'll reword this to improve clarity.
**Q7**
For each $t$ value, we draw 10k samples from the forward process $z \sim p_t(z | x^{(i)}) p_D(x)$, with $p_t(z | x^{(i)})$ defined on L75.
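To make this concrete, a minimal sketch of drawing $z \sim p_t(z \mid x) p_D(x)$ with the variance-exploding convention $p_t(z \mid x) = \mathcal{N}(x, t^2 I)$ (the toy dataset and shapes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set of flattened 32x32 images, standing in for p_D(x).
train = rng.normal(size=(100, 32 * 32))

def sample_forward(train, t, n_samples, rng):
    """Draw z ~ p_t(z | x) p_D(x): pick a training image uniformly,
    then add N(0, t^2 I) noise."""
    idx = rng.integers(0, train.shape[0], size=n_samples)
    x = train[idx]
    return x + t * rng.normal(size=x.shape)

z = sample_forward(train, t=3.2, n_samples=10_000, rng=rng)
assert z.shape == (10_000, 32 * 32)
```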
---
Rebuttal Comment 1.1:
Comment: I would like to express my gratitude to the authors for their comprehensive response, particularly in light of the additional network that does not incorporate attention. I appreciate the effort they have put into addressing the concerns raised.
However, I still have many reservations about the work. I agree with reviewer wsKg that the approach and insight into analyzing diffusion through locality bias are indeed intriguing. Nevertheless, several questions remain unanswered:
- Is the CIFAR dataset sufficient to support these claims, or is it too simplistic (specifically in terms of non-attention networks)?
- Is the observed locality a consequence of the dataset's simplicity, or can this claim be generalized to more complex datasets with intricate local and global structures?
- If the generalization phenomenon is indeed linked to local denoising, why is it that the proposed method performs so poorly even on the simple datasets considered?
To strengthen the paper's claims about generalization, I firmly believe that more extensive and rigorous analysis is necessary, especially if the authors rely solely on empirical evidence. These concerns still make me believe that the paper falls short of acceptance at this time.
I would like to acknowledge that my expertise in the literature may not be as comprehensive as that of other reviewers, and I kindly request that the Area Chairs take this into consideration when evaluating my feedback.
However, as a researcher who has been actively working on advancing our understanding of the generalization capabilities of diffusion models, I must confess that I remain unconvinced by the paper's arguments. My reservations stem from my own experiences and insights gained through working closely with these models, and I hope that my feedback will be taken in the spirit of constructive criticism.
---
Reply to Comment 1.1.1:
Comment: ### **General Comment**
Thank you to the reviewer for engaging with our rebuttal. We appreciate the opportunity to further discuss why we believe our work warrants acceptance. In our previous response, we made a concerted effort to individually address each component of the initial review. Of the issues we initially addressed, only the impact of self-attention was mentioned in the rebuttal reply. We hope this indicates that our responses in the other areas satisfactorily resolved the reviewer’s concerns.
However, in light of the reviewer’s unchanged evaluation, it remains unclear whether our responses did not sufficiently address the original concerns, or whether the addressed concerns were not central to their overall assessment. In either case, more specific feedback would have been appreciated to better understand the basis for the continued reservations, so that we might have addressed them more effectively.
The rebuttal reply also introduces three new concerns not mentioned in the original review. While we are, of course, happy to address them here, we would have welcomed the opportunity to engage with them earlier in the review process.
### **Response to Additional Questions**
> - Is the CIFAR dataset sufficient to support these claims, or is it too simplistic (specifically in terms of non-attention networks)?
We would like to clarify that along with CIFAR, our work also evaluates PSPC on FFHQ and AFHQ which are standard in the field. In the literature, these datasets have been used in similar prior art to support arguments about generalization in diffusion models. For example, the analysis of [1] predominantly relies on CIFAR, those of [2] mostly use CIFAR & FFHQ, and [3] utilizes the same datasets as our work.
We believe Figure 6 (including the alternative versions included in our rebuttal) clearly illustrates that PSPC methods have quantitatively similar relative performance to other denoisers on each dataset. Figure 8 further highlights the qualitative similarities between PSPC and Network denoisers across these datasets.
In terms of non-attention networks, the performance of DDPM++ and the attention-free alternative we trained on CIFAR are nearly identical. We have no reason to believe that this relationship would be substantially different for FFHQ or AFHQ. Unfortunately, we are unable to retrain an attention-free DDPM++ variant on these datasets in the time remaining in the discussion period.
[1] Zhang. J. et al, “The emergence of reproducibility and consistency in diffusion models” 2023
[2] Wang B. and Vastola, J.J. “The unreasonable effectiveness of gaussian score approximation” 2024
[3] Li X. et al. “Understanding generalizability of diffusion models requires rethinking the hidden gaussian structure.” 2024
> - Is the observed locality a consequence of the dataset's simplicity, or can this claim be generalized to more complex datasets with intricate local and global structures?
In addition to our comments previously regarding the widespread usage of our chosen datasets, we would also like to mention that all three datasets have both local and global structure. In both AFHQ and FFHQ, there are global structures such as facial shape and pose, as well as local structures such as hair texture. In CIFAR, this is somewhat diminished by the resolution. However, Fig. 2 still clearly shows a mix of global and local structure - a kitten, with globally positioned ears, tail and paws, with local coat colourations.
> - If the generalization phenomenon is indeed linked to local denoising, why is it that the proposed method performs so poorly even on the simple datasets considered?
We respond to this point in the first part of our response to Q1. of reviewer **wsKg**. To elaborate, while it is true that PSPC’s performance is poor compared to network denoisers, it is - to the best of our knowledge - the best available empirical approximation to the output of network denoisers. We believe that this similarity is evidence to reasonably conclude that local denoising is one piece of the diffusion generalization puzzle. However, it is apparent from the differences that there are still missing pieces remaining. We believe that understanding what these remaining missing pieces are is an important and exciting research direction for the community. | Summary: The paper studies the inductive bias of diffusion models that enable generalization. The authors attribute such inductive bias to the locally denoising capability of diffusion models, which is supported by the observation that the network outputs are sensitive to the local perturbation of its input.
Based on this intuition, the authors then propose to reproduce the generalization of diffusion models with a patch-based local denoiser. They find that the resulting denoiser generates similar outputs to those of the real diffusion models.
Claims And Evidence: The paper claims that the PSPC-Flex samples are remarkably similar to those of the diffusion model. However, Figures 8, 21 and 22 show that the generated images of PSPC-Flex are significantly different from those generated by the diffusion model, especially in Figure 21. This implies the locally-denoising inductive bias cannot fully explain the inductive bias of the real diffusion model.
Methods And Evaluation Criteria: The authors demonstrate the similarity between the proposed patch denoiser and the real diffusion models by comparing the denoiser outputs, which might not be equivalent to the generalization capability. For example, although Figures 13-20 demonstrate PSPC can generate denoiser outputs similar to those of the diffusion models, PSPC fails to generate high quality samples (Figure 21). This suggests there exists a huge gap between the generalization ability of the models and the denoising ability of the models. Due to this gap, the authors should propose alternative metrics for measuring the generalization ability.
Another issue is that the authors only compare PSPC with the Gaussian denoiser. More baselines, such as the closed-form diffusion models and other types of patch-based diffusion models mentioned in the paper, should be included.
Theoretical Claims: The theoretical claims have no obvious issues.
Experimental Designs Or Analyses: I have doubt on measuring the generalization ability of different models using the MSE of denoiser outputs. The detailed reason can be found in the "Methods And Evaluation Criteria" section.
Supplementary Material: I review all the supplementary material.
Relation To Broader Scientific Literature: Previous works have shown that diffusion models have certain inductive bias that enable generalization. The authors try to characterize the properties of such inductive bias. They attribute such inductive bias to the locally denoising operation. The findings are novel.
Essential References Not Discussed: I am not aware of any essential references that are not discussed.
Other Strengths And Weaknesses: No comments.
Other Comments Or Suggestions: Are the labels in Figure 8 correct? Which row corresponds to DDPM++?
Questions For Authors: 1. Please clarify the gap between denoising ability and generalization ability. Why the proposed patch denoiser well-matches the diffusion models in Figure 13-20 but diverges significantly in Figure 21? Such gap is not adequately addressed in the current paper. This raises concerns on how much the locally denoising mechanism contributes to the generalization ability.
2. Please compare the proposed denoiser with more (classical patch based) denoisers.
3. Why does PSPC share the highest similarity with the Gaussian denoiser rather than the diffusion model (Figure 9)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ### **General Response**
We'd like to thank the reviewer for taking the time to read and review our work. Please find our responses to your review below. If there are any items which you believe have not been addressed, we welcome this feedback.
### **Other Comments Or Suggestions:**
> Are the labels in Figure 8 correct? Which row corresponds to DDPM++ ?
The labels in Figure 8 are correct, the middle row of each subplot corresponds to DDPM++ (An EDM architecture) evaluated on the $\mathbf{z}$ value in the top row. The left column of subplots show sampling trajectories using the DDPM++ denoiser, while the right column shows sampling trajectories using the PSPC-Flex denoiser.
### **Questions For Authors:**
> 1. Please clarify the gap between denoising ability and generalization ability. Why the proposed patch denoiser well-matches the diffusion models in Figure 13-20 but diverges significantly in Figure 21? Such gap is not adequately addressed in the current paper. This raises concerns on how much the locally denoising mechanism contributes to the generalization ability.
The reviewer is correct that the samples produced by PSPC are significantly different from those produced by network denoisers. Although we tried our best to match the output of network denoisers as closely as possible, our method is an imperfect approximation of their behaviour. There is a significant difference between our fully empirical method and a deep neural network trained with gradient descent over millions of $\mathbf{z}$ samples. We believe the remarkable similarity between our denoiser and network denoisers presents compelling evidence that network denoisers may employ similar generalization mechanisms.
Figure 1 & 6 illustrate that significant differences remain between PSPC and network denoisers, especially for intermediate $t$. When sampling with PSPC-Flex, these differences in denoiser outputs compound to result in PSPC-Flex PF-ODE trajectories which slowly drift from the PF-ODE trajectories of network denoisers. We discuss this briefly in section 6.2, line 371.
This process of error accumulation is visualized in Figure 8, which illustrates the denoiser outputs of PSPC-Flex and DDPM++ on shared $\mathbf{z}$ inputs drawn from DDPM++ (left) and PSPC-Flex (right) PF-ODE trajectories. Comparing the network and PSPC denoisers, we can see that for all cases and $t$, the outputs are highly similar. However, differences between the two, which are most pronounced in the middle of the trajectory, result in substantially worse samples in the right column of Figure 8 (PSPC-Flex) than in the left column (DDPM++). Despite the artifacts present in PSPC samples, we highlight that the structure of the final samples in the right column of Figure 8 is similar to that of the final network samples in the left column.
Notably, even as sample quality degrades (i.e. right column, $t < 0.5$), both denoisers produce similar denoiser outputs. These degraded $\mathbf{z}$ samples are obviously outside the training distribution $p_t(\mathbf{z})$ and are clear examples of neural network generalization. The fact that PSPC-Flex outputs are similar to network denoisers in this case provides further evidence that a local denoising mechanism may be partially responsible for the generalization of diffusion denoisers.
> 2. Please compare the proposed denoiser with more (classical patch based) denoisers.
In the methods & evaluation criteria section of your review, you specifically mention Closed-Form Diffusion Models (Scarvellis et al. 2023) as a missing baseline method. We have therefore implemented this method and added it as a baseline to Figure 6 [here](https://drive.google.com/file/d/1esmDa7DP2rPNjPShzRCh8UTPUUMzauPe/view?usp=sharing). We used hyperparameters $\sigma=0.1, M=2$, which they report in Appendix section C2. for their CelebA results. However, we did not find that CFDM outputs were significantly different to the optimal denoiser with these settings.
If there are other baselines you believe are necessary, please cite them explicitly and we will try our best to implement them by the end of the discussion period.
> 3. Why does PSPC share the highest similarity with the Gaussian denoiser rather than the diffusion model (Figure 9)?
We are unsure as to why PSPC-Flex samples are more similar to the Gaussian denoiser. Recently, several methods have been proposed to explain diffusion generalization, including our work, the optimal Gaussian denoiser, and the geometrically adaptive harmonic bases of Kadkhodaie et. al (2023). Understanding the similarities between these methodologies is an interesting direction for future research.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors response. Below, I will add my additional comments.
1. My concern remains on the choice of using denoising outputs to measure the similarity between the diffusion denoiser and the proposed denoiser: though the proposed method produces highly similar denoising outputs, the generated quality is much worse. Since we are interested in using the model to generate images, not perform denoising, I think the evaluation in the paper should focus more on the generating ability. I would like to see the authors evaluate the proposed method using metrics like FID, IS, Recall, and Precision, which are commonly used for evaluating generative models.
2. Related to my first point, since the authors evaluate the similarity between the proposed denoisers and diffusion models using the denoising outputs, I am interested in whether some traditional patch-based denoisers can generate images as well. For example, can BM3d generate images that are very close to those of diffusion models?
3. It is still kind of strange to me that PSPC is closer to the Gaussian denoiser than to the actual diffusion models, since it seems the Gaussian denoiser does not use the locality inductive bias. It would be nice if the authors can provide some visualizations of its samples and compare them with those of PSPC. Similarly, I'd like to see the generated samples of traditional methods such as BM3d and the references mentioned in section 7, patch-based denoising section (it is sufficient to just show me one or two algorithms). I would also appreciate it if the authors can visualize the samples generated by the closed-form diffusion algorithm.
---
Overall I think this work brings novel and interesting insights (locality bias) into the current progress of understanding diffusion generation. But the way locality is implemented by PSPC might not be optimal, as the generated samples do not look good enough on CIFAR-10. Though I believe locality is an important inductive bias of diffusion models, it remains unclear how much locality contributes to the generalization. That being said, I do think this work is meaningful and worth being published.
---
Reply to Comment 1.1.1:
Comment: We'd like to thank the reviewer for their consideration of our rebuttal. We also appreciate their praise that our work is meaningful, novel, interesting, and worthy of publication. Below we have addressed each item of the reviewer's additional comments.
### **1. Sample Quality Metrics**
We slightly disagree with the reviewer’s statement:
> Since we are interested in using the model to generate images, not perform denoising
While this is certainly true when designing a generative model, this was _not_ our goal when designing PSPC. Instead, our aim was to understand how diffusion models generalize. As diffusion samples are the result of sequential denoiser generalizations, we believe that individually analyzing each step of the sampling procedure is needed to understand the generalization process as a whole. In this context, we believe PSPC’s “highly similar denoising outputs” provide strong evidence to support our hypothesis that local denoising is a key component of this generalization process.
We fully agree with the reviewer that the implementation of locality in PSPC is almost certainly not optimal. As it stands, the sample quality of PSPC is too poor to be used as a generative model. In the future, we believe understanding the autoregressive drift mentioned in our original rebuttal is necessary if we wish to improve PSPC _into_ a reasonable generative model.
It is clear that the distributional metrics mentioned by the reviewer should be optimized if we wish to improve the sample quality of PSPC. However, we note that these metrics are challenging to optimize as they give no feedback as to when or how individual sampling trajectories deviate from network baselines. For this reason, we believe reporting MSE over the entire reverse process is a more useful metric as it characterizes exactly where these deviations occur. It is our view that this characterization is an important first step in any future work to improve PSPC into a useful generative model.
### **2. Traditional Patch Denoisers**
We have included BM3d as a baseline in our response to the reviewers third point. However, we’d also like to clarify two differences between diffusion denoisers and the classical denoisers we reference in section 7
1. Classical denoisers are interested in sampling an image $\mathbf{x}$ given a noisy $\mathbf{z}$. That is, their aim is $\mathbf{x} \sim p_t(\mathbf{x} | \mathbf{z})$. By contrast, diffusion denoisers must estimate the posterior mean $\mathbb{E}[\mathbf{x} | \mathbf{z}, t]$. While these objectives are similar for sufficiently low $t$, at intermediate noise levels and above the problems are distinct. We would not expect classical denoisers to produce reasonable posterior mean estimates for large values of $t$.
2. The noise levels on which diffusion models are trained are generally much higher than classical methods. For example, in “Field of Experts” (Roth & Black 2005), they evaluate denoising up to a maximum PSNR of 8.13, corresponding to approximately $t=0.8$ in our work.
These challenges are demonstrated when using BM3d on images with higher amounts of noise. While performance for small $t$ is good, BM3d's performance suffers beyond $t=0.1$ [[Link](https://drive.google.com/file/d/1YtqRcjpzE4cuWKrtvRMyw8aMwx1HuKBf)].
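The posterior-mean objective from point 1 can be made concrete with the empirical (training-set) denoiser, which is a softmax-weighted average of training points — a toy sketch assuming $p_t(z \mid x) = \mathcal{N}(x, t^2 I)$ and an illustrative dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_posterior_mean(z, train, t):
    """E[x | z, t] under p_t(z | x) = N(x, t^2 I) and the empirical data
    distribution: a softmax-weighted average of the training points."""
    d2 = ((train - z) ** 2).sum(axis=1)   # ||z - x_i||^2 for each training point
    logw = -d2 / (2 * t ** 2)
    w = np.exp(logw - logw.max())         # numerically stabilized softmax weights
    w /= w.sum()
    return w @ train

train = rng.normal(size=(50, 8))
z = train[0] + 0.05 * rng.normal(size=8)  # lightly noised copy of one datum

# At low t the posterior mean collapses onto the nearest training point;
# classical denoisers instead target a sample x ~ p_t(x | z), and the two
# objectives diverge at intermediate-to-high noise levels.
x_hat = empirical_posterior_mean(z, train, t=0.05)
nearest = train[np.argmin(((train - z) ** 2).sum(axis=1))]
assert np.allclose(x_hat, nearest, atol=1e-6)
```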
We will include our elaboration on the differences between traditional and diffusion denoising problems in section 7 of any camera ready revision.
### **3. Sample Visualizations**
While the Gaussian denoiser does not explicitly utilize local operations to the same degree as PSPC, it is worth noting that their denoiser is primarily built upon the covariance matrix of the training dataset which captures local correlations in the data. We suspect that this is the primary reason that our denoisers have similar behaviour. Although more thorough investigation is required to test that hypothesis, we believe that it is an exciting avenue for future research.
In response to the reviewer's request, we have compiled samples from a number of denoisers [[Link](https://drive.google.com/file/d/11oKbig7zhr6r-zKwjXNGHmrpdDdZVm8-)].
Examining the figure, Gaussian samples do share a remarkable structural similarity to those of PSPC, but with higher saturation. Looking at the samples of closed-form diffusion models, the samples are high quality, but this is because they are all exact training set copies. Notably however, they are not always the same images as those produced by the empirical denoiser. Finally, the quality of the BM3d samples demonstrates that it is unsuitable as a diffusion denoiser.
We will include this figure in the supplementary of any camera ready revision. | Summary: This paper attempts to explain the mechanism behind the ability of diffusion models to generalize beyond the training data. It starts by pointing out that this ability is due to the neural denoiser deviating from the optimal empirical denoiser (the optimal denoiser for the training set). It then shows that the function learned by neural denoisers is more similar to local empirical denoisers, namely ones which operate on small-sized patches rather than on the whole image. The paper therefore concludes that it is this tendency of neural denoisers to learn local operations that enables diffusion models to generalize well.
## update after rebuttal
The rebuttal has answered my questions. I keep my original score.
Claims And Evidence: The claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense.
Theoretical Claims: The paper doesn't present any formal theoretical claims in the forms of theorems or lemmas.
Experimental Designs Or Analyses: I checked the experimental settings for all the experiments. In general, all experiments seem valid to me.
There are, however, several missing details that I think are important to mention and discuss:
- What was the dataset of patches used to construct each empirical patch denoiser (both for PSPC-patch and for PSPC-square)? For example, when constructing the denoised patch at location (i,j), does the computation involve all the overlapping patches extracted from all the training images, or only the patches at location (i,j) extracted from all the training images (namely, only one patch per training image)?
- Were denoised patches constructed only for the patches that are fully contained within the 64x64 image, or did patches at the boundaries also contain zero-padded regions?
These points are important for understanding how the sampling process with PSPC manages to generate sky at the top, grass at the bottom, etc. If when denoising each patch, the computation involves patches extracted from all spatial locations within the training images, and there are no cues from padding, then the statistics for each patch are the same. Namely, patches at the upper part of the generated image should not necessarily favor sky, and patches at the bottom should not necessarily favor grass. All patches are obtained using the same operation applied to the input.
It would be great if the authors can clarify those points.
Supplementary Material: I reviewed all the supplementary material.
Relation To Broader Scientific Literature: Understand generalization in diffusion models is a topic of vast interest, both theoretically, and with empirically.
This paper presents convincing evidence that at least part of the generalization capabilities of diffusion models is associated with the tendency of neural denoisers to learn local processing. Interestingly, the tendency to learn local processing is not a feature of a particular architecture. Even DiTs, whose architecture doesn't promote this implicit bias, learn local processing. These observations are certainly interesting to the community and may draw follow-up works that attempt to explain this tendency.
Essential References Not Discussed: The paper draws connections to classical patch-based image restoration methods. But there were also quite a few patch-based image generation methods. Most of them in the context of learning from a single image. Starting from the classical texture-generation paper:
- J. De Bonet, "Multiresolution sampling procedure for analysis and synthesis of texture images", SIGGRAPH'97.
To more recent papers, like:
- N. Granot, et al. "Drop the GAN: In defense of patches nearest neighbors as single image generative models", CVPR'22.
These are also related to methods that learn patch statistics using GANs (e.g. SinGAN) or using diffusion models (e.g. SinFusion, SinDiffusion, SinDDM).
Other Strengths And Weaknesses: As mentioned above, this paper's main strength is that it provides convincing experiments that point to a plausible explanation for generalization in diffusion models.
A weakness is that the paper does not explain, and does not provide supporting experiments, how local processing induces coherent global structure. If when denoising a patch, the denoiser doesn't know from where in the image that patch was extracted, how does the denoiser know whether this patch is more likely to contain sky (if it was extracted from the upper part of the image) or grass (if it was extracted from the lower part of the image)?
Other Comments Or Suggestions: Typo: It seems that in Eq. (8), the input to $D_\theta$ should be $z$ rather than $x$.
Questions For Authors: See comments in the sections above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ### **General Comment**
We'd like to thank the reviewer for taking the time to read and review our paper. Furthermore, we appreciate the additional references that the reviewer has brought to our attention. Below, we have responded to several items of your review. If there are items which you feel have not been sufficiently addressed, we would welcome additional discussion on these matters.
### **Experimental Designs or Analyses:**
>There are, however, several missing details that I think are important to mention and discuss:
We have clarified both of the details below. Although we believe that our definitions of $C_s$ on line 199, $C_{G(t, \lambda)}$ on line 316, as well as equations 9,10 and 11 address these details, we will update the text in the camera ready revision to improve clarity on these items
> - What was the dataset of patches used to construct each empirical patch denoiser (both for PSPC-patch and for PSPC-square)? For example, when constructing the denoised patch at location (i,j), does the computation involve all the overlapping patches extracted from all the training images, or only the patches at location (i,j) extracted from all the training images (namely, only one patch per training image)?
For both PSPC-Flex and PSPC-Square, we construct an empirical patch set for each patch posterior mean. In the case of PSPC-Flex, this corresponds to one patch set for each pixel in the input image (ie $32^2$ patch sets for CIFAR-10 and $64^2$ patch sets for AFHQ & FFHQ). For PSPC-Square, we create a dataset for each possible square patch location with no padding and a stride of one. The total number of patch sets created for PSPC-Square varies depending on patch size. In general, we create $(h - s + 1)^2$ patch sets, where $h$ is the height of the image in pixels and $s$ is the height of the square patch in pixels. For both PSPC-Flex and PSPC-Square, each patch set is created by applying the same cropping matrix to each image in the training set. This results in patch sets with one spatially localized patch per training set image.
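A minimal sketch of this bookkeeping for the PSPC-Square case (single-channel square images are assumed for simplicity; shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grayscale training set: N images of size h x h.
N, h, s = 20, 8, 3
train = rng.normal(size=(N, h, h))

def square_patch_sets(train, s):
    """One spatially localized patch set per top-left location (i, j):
    the same s x s crop taken from every training image (no padding, stride 1)."""
    N, h, _ = train.shape
    sets = {}
    for i in range(h - s + 1):
        for j in range(h - s + 1):
            sets[(i, j)] = train[:, i:i + s, j:j + s]   # shape (N, s, s)
    return sets

sets = square_patch_sets(train, s)
# (h - s + 1)^2 patch sets, each with one patch per training image.
assert len(sets) == (h - s + 1) ** 2
assert sets[(0, 0)].shape == (N, s, s)
```

Because each set only contains patches from one spatial location, the patch statistics remain position-dependent (e.g. sky-like patches at the top of the image).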
> - Were denoised patches constructed only for the patches that are fully contained within the 64x64 image, or did patches at the boundaries also contain zero-padded regions?
We did not use padding in our computations.
### **Essential References Not Discussed:**
>The paper draws connections to classical patch-based image restoration methods. But there were also quite a few patch-based image generation methods. Most of them in the context of learning from a single image. Starting from the classical texture-generation paper:
> - J. De Bonet, "Multiresolution sampling procedure for analysis and synthesis of texture images", SIGGRAPH'97.
>
>To more recent papers, like:
> - N. Granot, et al. "Drop the GAN: In defense of patches nearest neighbors as single image generative models", CVPR'22.
>
>These are also related to methods that learn patch statistics using GANs (e.g. SinGAN) or using diffusion models (e.g. SinFusion, SinDiffusion, SinDDM).
We thank the reviewer for drawing our attention to these highly relevant references. We will include the references mentioned in the camera ready version of the paper.
### **Other Strengths And Weaknesses:**
>A weakness is that the paper does not explain, and does not provide supporting experiments, how local processing induces coherent global structure. If when denoising a patch, the denoiser doesn't know from where in the image that patch was extracted, how does the denoiser know whether this patch is more likely to contain sky (if it was extracted from the upper part of the image) or grass (if it was extracted from the lower part of the image)?
As mentioned in the experimental design, we use spatially consistent patches for each patch posterior mean. This means that for example, when denoising patches in the upper portion of the image, the patch dataset contains patches which are more likely to contain sky than grass. We found this detail to be especially important for localized datasets such as FFHQ where facial features (noses, eyes, mouths etc.) are located in specific spatial regions of the image.
Another key finding of our work is that the ideal patch size and therefore the degree of locality is anti-correlated with the level of noise. When generating a sample with PSPC-Flex or PSPC-Square, we initially denoise using large patches before moving to smaller patches. The larger patches therefore condition the posterior means of the later, smaller patch denoising operations. We believe this large-to-small patch hierarchy is important for ensuring coherent global structure.
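A toy schedule consistent with the large-to-small progression described above: patches are large at high noise and shrink as the noise level decreases. This is purely our illustration (the function, linear interpolation, and size bounds are invented, not the paper's schedule):

```python
def patch_size(sigma, sigma_max, s_min=3, s_max=15):
    """Toy noise-to-patch-size schedule (our illustration, not the paper's):
    large patches at high noise, shrinking toward small patches as the
    noise level decreases."""
    frac = max(0.0, min(1.0, sigma / sigma_max))  # clamp to [0, 1]
    return round(s_min + frac * (s_max - s_min))

sizes = [patch_size(s, sigma_max=10.0) for s in (10.0, 5.0, 1.0, 0.0)]
print(sizes)  # monotonically shrinking patch sizes: [15, 9, 4, 3]
```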
### **Other Comments Or Suggestions:**
>Typo: It seems that in Eq. (8), the input to $D_\\theta$ should be $\\mathbf{z}$ rather than $\\mathbf{x}$
Thank you for this observation. You are correct, the input to $D_\\theta$ is $\\mathbf{z}$. We will correct this mistake in the camera ready.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for the detailed answers. These have addressed my concerns, and it would be best if they were clarified and discussed in the paper. I keep my original rating. | Summary: That real diffusion models generalize from their training data, rather than memorize it, is not obvious: the optimal solution to a typical denoising score matching objective is the score of the (empirical) data distribution, which can only reproduce training examples. Why do diffusion models generalize, and what inductive biases determine how they generalize? The authors of this paper propose that denoising happens locally, in 'patches', rather than globally, and that this is at least partly responsible for generalization. They validate this hypothesis through a variety of numerical experiments, which show that their patch denoiser looks more like real denoisers than the 'optimal' one.
Claims And Evidence: The paper has a clear hypothesis and collects an impressive set of empirical results to validate it.
Methods And Evaluation Criteria: The methods by which the authors test their hypothesis are reasonable, and their evaluation criteria (a mix of comparing denoisers and looking at generated samples) make sense.
Theoretical Claims: The authors make no major new theoretical claims.
Experimental Designs Or Analyses: The experiments the authors run are well-designed and appear sound.
Supplementary Material: I read the SI, which is short and mostly contains additional illustrative results.
Relation To Broader Scientific Literature: This paper makes a major contribution to the study of diffusion model generalization/memorization, which is itself a fairly major subfield of diffusion models. It also contributes meaningfully to literature on how generative models generalize and produce 'creative' output. Their main claim is kind of similar to that of Kamb and Ganguli, which they mention, but the details of their approach are a bit different, and it is likely that the two works developed somewhat independently.
Essential References Not Discussed: No references come to mind.
Other Strengths And Weaknesses: This paper is well-written, clear, and has well-made figures. It was a joy to read.
As a very minor point, it could be helpful to include some additional discussion of what these findings may mean. This inductive bias ('locality', to use Kamb and Ganguli's terminology) seems to be true for at least U-net- and vision-transformer-based diffusion models trained on images. What about other kinds of architectures or data sets?
Other Comments Or Suggestions: line ~158: "plot four such heatmaps in Figure 3" needs a period
line ~359: "can be found in Appendix D" needs a period
Questions For Authors: What kind of architectures does one expect to exhibit this inductive bias? If one trained a fully-connected MLP-like architecture to denoise images, would it exhibit this bias?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: ### **General Comment**
We'd like to thank the reviewer for their thoughtful consideration of our work. Below, we have responded to specific components of your review. If you have additional questions, we would welcome further discussion.
### **Relation To Broader Scientific Literature:**
>This paper makes a major contribution to the study of diffusion model generalization/memorization, which is itself a fairly major subfield of diffusion models. It also contributes meaningfully to literature on how generative models generalize and produce 'creative' output. Their main claim is kind of similar to that of Kamb and Ganguli, which they mention, but the details of their approach are a bit different, and it is likely that the two works developed somewhat independently.
Although our method shares many similarities to Kamb & Ganguli, our work is entirely concurrent and independent. There are some differences between our methodologies and experimental results which we will highlight here, but plan to elaborate on in our camera ready revision.
The primary difference between our works is that Kamb & Ganguli utilize shared patch distributions whereas we utilize patch distributions which are localized to specific image locations. We found that this localization was essential for FFHQ & AFHQ where the subject is centered within the image. In addition, Kamb & Ganguli utilize square patches exclusively, while PSPC-Flex uses localized, adaptive patch shapes and sizes. Experimentally, while Kamb & Ganguli restrict their analysis to only convolutional networks, we find similar generalization patterns exist even in convolution free architectures such as DiT and UViT.
### **Other Strengths And Weaknesses:**
>As a very minor point, it could be helpful to include some additional discussion of what these findings may mean. This inductive bias ('locality', to use Kamb and Ganguli's terminology) seems to be true for at least U-net- and vision-transformer-based diffusion models trained on images. What about other kinds of architectures or data sets?
Please see our response to this point in the Questions for Authors section.
### **Other Comments Or Suggestions:**
>line ~158: "plot four such heatmaps in Figure 3" needs a period
>
>line ~359: "can be found in Appendix D" needs a period
We'd like to thank the reviewer for identifying these items. We will fix both in the camera ready revision.
### **Questions For Authors:**
> What kind of architectures does one expect to exhibit this inductive bias? If one trained a fully-connected MLP-like architecture to denoise images, would it exhibit this bias?
This is an interesting open question in our opinion. Although more rigorous study is required, we would guess that the inductive biases are probably due to interplay between the diffusion process and the correlations in the data. For example, in datasets where the features are not locally correlated (i.e., if the data dimensions were shuffled), we would not expect this local inductive bias to be useful.
Our observations in this paper seem to suggest that network architectures do not significantly affect the generalization behaviour. We would therefore expect MLPs to have similar properties to DiT & U-Net architectures when trained on the same data. This hypothesis is somewhat supported by prior art. For example, Li et al. (2024) show that the optimal linear denoiser is the Gaussian denoiser, which produces outputs quite similar to those of our method.
Improving Generalization in Federated Learning with Highly Heterogeneous Data via Momentum-Based Stochastic Controlled Weight Averaging | Accept (poster) | Summary: Generalization capability is critical for FL in real-world applications. This paper revisits the generalization problem in FL, focusing on the impact of data heterogeneity. The authors propose FedSWA, which uses Stochastic Weight Averaging to find flatter minima, and FedMoSWA, a momentum-based variant designed to better align local and global models. Theoretical analysis provides convergence and generalization bounds for both algorithms, showing that FedMoSWA achieves smaller generalization errors compared to FedSAM and its variants. Empirical experiments on CIFAR10/100 and Tiny ImageNet demonstrate the superiority of the proposed methods.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: some of
Experimental Designs Or Analyses: Lacking comparison to
FedSMOO: Dynamic regularized sharpness aware minimization in federated learning: Approaching global consistency and smooth landscape
Fedgamma: Federated learning with global sharpness-aware minimization
Supplementary Material: yes, some of proof and loss visualization
Relation To Broader Scientific Literature: Try to improve generalization ability of FL in real world scenarios.
Essential References Not Discussed: lacking comparison to FedSMOO, and FedGAMMA
FedSMOO: Dynamic regularized sharpness aware minimization in federated learning: Approaching global consistency and smooth landscape
Fedgamma: Federated learning with global sharpness-aware minimization
Other Strengths And Weaknesses: Strengths:
1) Well written and good representation
2) Good results, although lacking comparison to FedSMOO, and FedGAMMA
3) Providing extensive theoretical analysis
Weaknesses:
only one major weakness is lacking comparison to fedsmoo and fedgamma
Other Comments Or Suggestions: NA
Questions For Authors: Can you provide comparison to FedSMOO and FedGAMMA?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We will make modifications in the final version as suggested by the reviewer, and our point by point responses to your major comments are given below.
To address your concern, we conducted many experiments on CIFAR-100 with ResNet-18 under different data heterogeneity levels (0.1, 0.3, 0.6), and compare the proposed algorithms with FedSMOO and FedGAMMA, suggested by the Reviewer. The reproduced results are based on the pre-activation ResNet18-GN network, which is widely used in federated learning algorithms such as FedSAM, FedGAMMA, and FedACG. The following results are produced under our framework, referencing the open-source code of FedSMOO and FedSAM. Because the FedGAMMA algorithm lacks open-source code, we implemented it based on the algorithm flow described in the paper and referenced the FedGamma implementation in FedLESAM's source code. We will include these results in the final version, and will release our code and the federated learning framework to ensure reproducibility.
| CIFAR-100| 0.1 | 0.3 | 0.6 |
|----------|-------|-------|-------|
| FedSMOO | 46.5 | 47.8 | 49.2 |
| FedGAMMA | 48.4 | 51.8 | 52.6 |
| FedSWA (ours) | 50.3 | 55.5 | 59.8 |
| FedMoSWA (ours) | 61.9 | 66.2 | 67.9 |
---
Rebuttal Comment 1.1:
Comment: Thanks! I have no further questions.
---
Reply to Comment 1.1.1:
Comment: I would like to sincerely thank you for your thoughtful and detailed feedback on our paper. We greatly appreciate the time and effort you dedicated to reviewing our work.
Your suggestions and comments were extremely helpful in improving the quality of our research, and we have carefully addressed them in our revised paper. We believe these revisions have strengthened our paper and made it more robust.
Thank you once again for your invaluable input. We are grateful for your constructive criticism, which has been instrumental in improving the quality of our work. | Summary: Tackles generalization issues in FL with highly heterogeneous data.
Introduces a new momentum-based stochastic controlled weight averaging FL algorithm.
Claims And Evidence: Provides some theoretical guarantees and empirical results.
Methods And Evaluation Criteria: Evaluations are conducted on different data sets and methods, but the improvement with this method is minimal.
Theoretical Claims: Provides both theoretical guarantees (convergence and generalization bounds) and solid empirical results.
Experimental Designs Or Analyses: Experiemtns are conducted on several data sets and models.
Supplementary Material: The supplementary material includes more theoretical analysis.
Relation To Broader Scientific Literature: Related to the problem of data heterogeneity.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: The proposed methods (FedSWA and FedMoSWA) appear to offer incremental improvements over existing approaches like FedSAM and MoFedSAM. The results don't show consistent improvement.
Other Comments Or Suggestions: The experiments focus primarily on benchmark datasets under simulated conditions (Dirichlet-). I'd like to see results on imbalanced CIFAR.
Questions For Authors: The source code of the method should be included.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We will make modifications in the final version as suggested by the reviewer, and our point by point responses to your major comments are given below.
1. Different from FedSAM, the proposed FedSWA algorithm is a new way to improving FL generalization. By using SWA, FedSWA better maintains global flatness consistency compared to FedSAM, as shown in Figure 1. Moreover, FedMoSWA also uses momentum stochastic control, unlike MoFedSAM with only local momentum SAM. Both techniques (i.e., SWA and momentum stochastic control) significantly enhance the theoretical results of the federated learning algorithms such as FedSAM and MoFedSAM, as shown in Table 1.
Experimental results: All the experimental results in the manuscript show that our algorithms show consistent improvements on both ResNet and Transformer networks. As shown in Tables 2, 3 and the followng table, FedSWA with SWA optimizer (59.8\%) outperforms FedAvg (54.2\%) and FedSAM (47.8\%) on ResNet-18 with CIFAR-100. Our FedMoSWA (67.9\%) also surpasses MoFedSAM (60.1\%). Both theoretical and experimental results show that FedSWA outperforms FedSAM, and FedMoSWA outperforms MoFedSAM.
| Algorithm | Accuracy (\%) | Improvement (\%) |
|----------|--------------|-----------------|
| FedAvg | 54.2 | - |
| SCAFFOLD | 54.1 | -0.1 |
| FedSAM | 47.8 | -6.4 |
| MoFedSAM | 60.1 | +5.9 |
|Our FedSWA | 59.8 | +5.6 |
|Our FedMoSWA | 67.9 | +13.7 |
Our theoretical results in Table 1 show that the generalization bound of our FedSWA is $\mathcal{O}\big(\frac{L}{m n \beta}e^{\frac{1}{T}+1}(\tilde{c} L+ \tilde{c}\sigma_g+\tilde{c} \sigma) \big)$, which is correlated with data heterogeneity and is superior to that of FedSAM, $\mathcal{O}\big(\frac{L}{m n \beta}e^{1+\frac{1}{T}}(\overline{c}L+ \overline{c}\sigma_g+\overline{c} \sigma) \big)$, and MoFedSAM, $\mathcal{O}\big(\frac{L}{m n \beta}e^{1+\frac{1}{T}}(\overline{c}L+ \overline{c}\sigma_g+\overline{c} \sigma) \big)$, where $\overline{c}>\tilde{c}=1+\left(2+1/KT\right)^{K-1}/T\gg 1$.
We prove that the generalization error of FedMoSWA is $\mathcal{O}\big(\frac{L}{m n\beta} e^{\frac{1}{T}+1}(\tilde{c} L+ \sigma_g+\tilde{c} \sigma) \big)$, which is better than those of FedSAM, MoFedSAM, and FedSWA, where $\tilde{c}=1+\left(2+1/KT\right)^{K-1}/T\gg 1$.
2. To address your concern, we conducted some experiments on CIFAR-100 with ResNet-18, following the pathological imbalanced settings from the FedSMOO and FedLESAM papers, where each client has at most 10 classes. This represents a highly imbalanced data distribution. These results will be included in our final paper.
Our FedSWA algorithm outperforms the algorithms, FedAvg and FedSAM. FedSWA uses SWA as the local optimizer, while FedAvg uses SGD, and FedSAM uses SAM as local optimizers, and none of them employ variance reduction or momentum acceleration techniques. As the advanced version of FedSWA, our FedMoSWA, which also incorporates momentum and variance reduction, surpasses other algorithms in all the settings.
| Algorithm | Accuracy (\%) |Improvement (\%) |
|----------|--------------|--------------|
| FedAvg | 42.7 |- |
| FedDyn | 49.1 | + 6.4 |
| SCAFFOLD | 43.1 | + 0.4 |
| FedSAM | 41.2 |- 1.5 |
| MoFedSAM | 45.6 |+ 2.9 |
| FedLESAM | 44.3 |+1.6 |
| FedACG | 52.6 |+9.9 |
| FedSWA (ours) | 48.3 |+5.6 |
| FedMoSWA(ours) | 55.5 |+12.8 |
3.In fact, our source code was included in the supplementary materials. We will release our code and the federated learning framework to ensure reproducibility. | Summary: This paper proposes two novel algorithms for improving generalization in federated learning.
The first approach, FedSWA, is a variant of FedAvg with stochastic weight averaging, a method known for finding flatter minimums.
The second approach, FedMoSWA, extends FedSWA with control variates that are updates using momentum, in order to handle heterogeneity.
Both approaches are studied theoretically, showing superior generalization guarantees than FedSAM and MoFedSAM.
Extensive numerical experiments confirm these findings.
Claims And Evidence: Theoretical claims are provided with full proofs.
Numerical claims provide both intuitive explanations and more thorough comparisons with many existing methods on multiple datasets and in multiple settings.
Methods And Evaluation Criteria: Benchmark datasets make sense, as well as the splits used to emulate heterogeneity.
Theoretical Claims: The proofs for optimization errors seem correct, and seem to be widely inspired by the proofs from Karimireedy et al., 2020.
I am less familiar with analyses of generalization error and I did not check them in details, but the results seem correct.
Experimental Designs Or Analyses: Experimental analyses are quite extensive, with comparison with many other algorithms on multiple datasets.
It seems that the only hyperparameters that were tuned are the client learning rate $\eta_\ell$, global step size $\alpha$, momentum step size $\gamma$ and local learning rate decay $\rho$ for FedSWA and FedMoSWA. In particular, it is not clear to me whether hyperparameters of other methods have been tuned or not, which could make the comparison unfair.
More specifically, the client selection rate is set to an arbitrary value. To my knowledge, most of the baselines (e.g. Scaffold) are known to underperform when selecting only a fraction of the clients at each rounds of communication.
There are thus two major differences between Scaffold and FedMoSWA, that should be studied in isolation: (i) the use of stochastic weight averaging, and (ii) momentum in control variates updates.
It is therefore not clear whether superiority observed in experiments is due to the improvement of (i) or (ii).
This is a bit concerning, especially seeing that FedSWA is closer to the baselines in Tables 2 and 3, suggesting that improvement over methods like Scaffold may come from this momentum stochastic control.
Performing experiments with full participation would clarify this question.
Supplementary Material: I skimmed through the supplementary material and did not identify errors.
Relation To Broader Scientific Literature: Related scientific literature is widely and appropriately discussed.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: **Strengths**
1. The two proposed methods are shown to achieve better generalization error theoretically and numerically.
2. The paper is very well written, with precise discussion describing intuition about the studied phenomenons.
3. FedMoSWA is shown to outperform many baselines numerically.
**Weaknesses**
1. There may be a lack of precision in the numerical analysis, which does not allow to distinguish whether FedMoSWA outperforms baselines due to SWA or due to the use of momentum in control variates (see Experimental Design or Analyses section).
2. The differences in optimization errors, notably between methods presented in Table 1, is not discussed. Providing discussion on this, specifically discussing whether this should have an impact on the result or not, would greatly improve this part of the discussion.
Other Comments Or Suggestions: The difference between FedSWA and FedMoSWA in Algorithm 1 is only shown with using different colors.
This is a problem when reading the paper in black and white or for the colorblind: the difference should be indicated in another way (on top of using colors) for accessibility reasons.
Questions For Authors: In the end, the generalization error still depends on heterogeneity. Surprisingly, this term does not disappear when taking only one local training step. Is it an artefact of the analysis, or is there something fundamental that remains even when doing only one local step?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the Reviewer for their valuable comments, and our point by point responses to your major comments are given below.
1. For Experimental Designs Or Analyses. In fact, we have done the hyperparameter tuning for all the algorithms, and we also followed the parameter settings from their original papers. We conducted full client participation experiments on CIFAR-100 with 10 clients and 300 rounds, where the model is ResNet-18. Here, FedMo denotes FedMoSWA without SWA (i) but only with Momentum Stochastic Control (ii), as shown in the following table. FedMo (62.5\%) outperforms SCAFFOLD (59.9\%) by 2.6\%. With 100 clients and 10\% participation, our FedMoSWA achieves 3.3\% higher accuracy than SCAFFOLD, addressing the update delay issue with partial client participation.
Full client participation on CIFAR-100 and ResNet-18 with 10 clients and 300 rounds:
| Algorithm | Accuracy (\%) | Improvement (\%) |
|----------|--------------| -----------------|
| FedAvg | 58.2 | - |
| FedDyn | 58.5 | +0.3 |
| SCAFFOLD | 59.9 | +1.7 |
| FedSAM | 48.3 | -9.9 |
| MoFedSAM | 37.9 | -20.3 |
| FedLESAM | 59.2 | +1.0 |
| FedACG | 60.9 | +2.7 |
| Our FedSWA (i) | 60.2 | +2.0 |
| Our FedMo (ii) | 62.5 | +4.3 |
|Our FedMoSWA (i)+(ii) | 63.2 | +5.0 |
2. For Weaknesses 1. From Tables 1 and 4 in our manuscript and the following table, when \(\rho=0.1\), FedSWA using only SWA (59.8\%) outperforms FedAvg (54.2\%) and FedSAM (47.8\%) on ResNet-18 with CIFAR-100. When \(\rho=1\), FedMo using only momentum stochastic control achieves 65.9\%, higher than SCAFFOLD (54.1\%), demonstrating that momentum variance reduction mitigates SCAFFOLD’s variance reduction delay. By combining SWA and momentum variance reduction, FedMoSWA achieves 67.9\%, showing that both SWA (+5.6\%) and momentum control (+11.7\%) are effective, and momentum control has a greater impact. Moreover, FedSWA is a simple algorithm like FedSAM and can be combined with other techniques. We will clarify this in our final paper.
10% client participation on CIFAR-100 and ResNet-18 with 100 clients and 1000 rounds:
| Algorithm | Accuracy (\%) | Improvement (\%) |
|----------|--------------|-----------------|
| FedAvg | 54.2 | - |
| SCAFFOLD | 54.1 | -0.1 |
| FedSAM | 47.8 | -6.4 |
| MoFedSAM | 60.1 | +5.9 |
| FedSWA (ours) | 59.8 | +5.6 |
| FedMo (ours) | 65.9 | +11.7 |
| FedMoSWA (ours)| 67.9 | +13.7 |
3. For Weaknesses 2. To address your concern, we will add some discussions about the optimization error analysis in our final paper. In fact, Section 5.2 discusses the optimization error analysis, showing that FedMoSWA converges faster than the best-known algorithm, SCAFFOLD, and outperforms both FedSWA and other baselines such as FedSAM, FedAvg, and MoFedSAM. Unlike MoFedSAM, which uses only local momentum, our FedMoSWA employs momentum variance reduction. Additionally, our FedSWA also converges faster than both FedSAM and FedAvg, as shown in Table 1. We will include all of these discussions in the Introduction of our final paper.
4. For Other Comments Or Suggestions. To address your concern, we will improve the final version of the paper, and will use italic and bold to distinguish the two algorithms.
5. For Questions. In our analysis, data heterogeneity does not vanish when performing only one local training step. It is not purely an artifact of analysis but a fundamental characteristic of federated learning algorithms, driven by the inherent bias introduced by local training on heterogeneous data. This bias does not completely vanish with only one local training step. Our future work will focus on addressing this issue.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer. I remain overall positive about this work, and will keep my score to 3.
> "Here, FedMo denotes FedMoSWA without SWA (i) but only with Momentum Stochastic Control (ii), as shown in the following table. FedMo (62.5%) outperforms SCAFFOLD (59.9%) by 2.6%. With 100 clients and 10% participation, our FedMoSWA achieves 3.3% higher accuracy than SCAFFOLD, addressing the update delay issue with partial client participation."
Indeed, SWA gives a slight additional performance bonus, although it seems that most of the improvement comes from the momentum in the end.
> "In our analysis, data heterogeneity does not vanish when performing only one local training step. It is not purely an artifact of analysis but a fundamental characteristic of federated learning algorithms, driven by the inherent bias introduced by local training on heterogeneous data. This bias does not completely vanish with only one local training step. "
While I agree that "fundamental characteristic of federated learning algorithms", it should not have any impact when no local training is used: in this setting, all algorithms boil down to SGD on the averaged loss, which does not suffer from heterogeneity. This suggests that there may be a flaw in the analysis (which could be fixed by future work).
---
Reply to Comment 1.1.1:
Comment: Thanks to the reviewer for the responses, which were very helpful, especially the second question.
For the first problem, the essence of the FedSWA algorithm is to find a flatter minimum, and we experimentally demonstrate that the FedSWA algorithm finds a flatter solution with better generalization than FedSAM and its variants. However, the FedSWA algorithm does not speed up the optimization process like momentum or variance reduction. So we propose the FedMoSWA algorithm, combined with momentum and variance reduction, to speed up the optimization process. To analyze the optimization algorithm, we can divide it into two processes: one process is the speed of the optimization algorithm, and the other is the generalization ability of the algorithm. FedSWA tends to address the generalization ability of the algorithm and can be used in conjunction with other speedup algorithms like SCAFFOLD, FedACG, etc. We propose the FedMoSWA algorithm to accelerate the convergence speed of FedSWA.
For the second issue, when the local iteration step is 1, the theoretical analysis cannot eliminate the influence of data heterogeneity, which seems to be a limitation of this theory. This problem has been encountered in papers [1,2]. The generalization error bounds of our algorithm are better than both of these algorithms. In the future, we will study new stability theories to improve this limitation. Under the independent and identically distributed (i.i.d.) setting ($\sigma_g=0 $), our research results are consistent with the classical results of stochastic gradient descent (SGD).
[1] Sun Z, Niu X, Wei E. Understanding generalization of federated learning via stability: Heterogeneity matters[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2024: 676-684.
[2] Sun Y, Shen L, Tao D. Understanding how consistency works in federated learning via stage-wise relaxed initialization[J]. Advances in Neural Information Processing Systems, 2023, 36: 80543-80574. | null | null | null | null | null | null | null | null |
LEAPS: A discrete neural sampler via locally equivariant networks | Accept (poster) | Summary: This paper introduces a continuous-time diffusion sampler that operates along an annealed energy path in the discrete domain. The approach is trained using a PINN-based objective as an upper bound for the Log Variance Loss. The authors propose locally invariant neural network architectures for parametrization. The method is experimentally validated on a 15x15 Ising Model.
Claims And Evidence: While the approach is theoretically sound, the empirical evidence is limited. The study includes only one experiment on the 15x15 Ising Model and lacks ablation studies for the proposed methods. Additionally, crucial details for reproducing the experiments are missing. Although a locally invariant MLP architecture is proposed, no experiments using this architecture are presented.
Methods And Evaluation Criteria: The model is evaluated based on effective sample size, two-point correlation function, and magnetization.
This makes sense; however, estimates based on log Z, the ELBO, internal energy, and entropy could be added, especially when $\mu = 0$, where these values are theoretically available.
Theoretical Claims: Theoretical claims were partially checked.
Experimental Designs Or Analyses: The experiments are not reproducible due to several reasons:
- No information is provided about the hyperparameter selection process or the final hyperparameters used, such as learning rate and the number of samples during training or evaluation.
- It is unclear whether the Ising Model is defined on a periodic or non-periodic grid.
- The value of $\mu$ in Eq. 18 is not specified, and there is no comparison to theoretically available solutions for the Free Energy, Entropy, and Internal Energy when $\mu = 0$.
Supplementary Material: The supplementary details do not sufficiently provide information on the experimental setup.
Relation To Broader Scientific Literature: The paper provides a broader relation to scientific literature but omits some relevant papers, particularly those related to discrete domain diffusion samplers.
Essential References Not Discussed: - [1] and [2] should also be cited in this context. [1] is seminal work in discrete samplers and [2] should be cited together with Nicoli et al., 2020.
- [3] and [4] are highly relevant discrete diffusion sampler papers. The Annealed Noise Distribution in [1] resembles a discrete variant of the transport proposed in this paper. [4] applies diffusion samplers in the discrete domain in statistical physics.
- [5] also proposes a PINN-based loss to learn a transport between a prior and target distribution in the continuous domain
**References:**
1. Wu, Dian, Lei Wang, and Pan Zhang. "Solving statistical mechanics using variational autoregressive networks." Physical review letters 122.8 (2019): 080602.
2. McNaughton, B., et al. "Boosting Monte Carlo simulations of spin glasses using autoregressive neural networks." Physical Review E 101.5 (2020): 053312.
3. Sanokowski, S., Hochreiter, S., & Lehner, S. A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization. In *Forty-first International Conference on Machine Learning*.
4. Sanokowski, Sebastian, et al. "Scalable Discrete Diffusion Samplers: Combinatorial Optimization and Statistical Physics." arXiv preprint arXiv:2502.08696 (2025).
5. Tian, Yifeng, Nishant Panda, and Yen Ting Lin. "Liouville Flow Importance Sampler." *arXiv preprint arXiv:2405.06672* (2024).
Other Strengths And Weaknesses: **Strengths:**
- The proposed framework and locally invariant architectures are interesting.
**Weaknesses:**
- Many ablation studies are missing, such as comparisons between LV loss vs. PINN loss and locally invariant networks vs. vanilla architectures.
- The experiments are scarce and not reproducible with the available information.
- There is no comparison to other discrete domain diffusion samplers or autoregressive samplers.
Other Comments Or Suggestions: This paper has the potential to be very impactful, but it requires more experimental evaluation and detailed descriptions of the experiments.
Experiments on spin glasses as in [4] would also be nice to have.
Questions For Authors: - How does a non-locally invariant architecture compare to the proposed architectures?
- How was the proposal distribution $\mathbb{Q}$ chosen?
- Why were $\beta = 0.7$ and $J = 0.4$ specifically chosen? The correlation length seems quite small for this choice (see Fig. 4 right). This might have the consequence that the problem is not hard for this choice of parameters.
- Why is there no comparison on the ising model, with $\mu = 0$, $J = 1$ and $\beta = 0.4407$ at the critical temperature where the correlation length does not decay and theoretical solutions can be used as a baseline?
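For context on this question: the critical point quoted here comes from Onsager's exact solution of the 2D Ising model, $\beta_c = \ln(1+\sqrt{2})/2 \approx 0.4407$ (for $J = 1$, $\mu = 0$). A quick numerical check of this constant:

```python
import math

# Onsager's exact critical inverse temperature for the 2D Ising model
# with coupling J = 1 and no external field (mu = 0).
beta_c = math.log(1 + math.sqrt(2)) / 2
print(round(beta_c, 4))  # 0.4407
```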
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your thorough and valuable feedback on our manuscript. Below we itemize and address your comments and concerns.
**Further experiments and benchmarks:**
- We have included additional experiments, as far as possible within the limited time frame. Specifically, we ran additional tests of the Ising model at the critical parameters you suggested, and benchmarked this setup against the samplers that reviewer YU3Q suggested (DISCS) and annealed importance sampling (AIS). The results are provided in the anonymous Google Drive https://tinyurl.com/leapsicml where we see that LEAPS performs favorably in this setting. To save space for addressing your other concerns, **please see our response to YU3Q for a thorough discussion of these results.**
**No experiments with locally equiv MLP**
- We have tested the MLP architecture, and it performed significantly worse. The convolutional architecture is geometrically translation equivariant, which is known in deep learning to outperform MLPs when the data (e.g. the statistical physics models we consider) is symmetric. We therefore do not consider this a meaningful benchmark.
**Experimental details.** Thank you for letting us know about missing details for replication. Here we supply an exhaustive list:
- Learning rate=5e-3
- Hyperparameter selection: We perform manual optimization of the neural network architectures and hyperparameters.
- Batch size 256 walkers, stored in a replay buffer. New walkers are added to the buffer every 50 training steps
- Training iterations: 30k
- Ising model: periodic lattice.
- External magnetic field: $\mu=0$.
- Comparison to theoretically available solutions: The theoretically available solutions are infinite grid sizes. We run Glauber dynamics that simulate the actual physics of the system for a sufficiently long time to recover the statistics that we compare against.
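To make the Glauber-dynamics baseline above concrete, here is a minimal sketch of single-spin-flip Glauber dynamics on a periodic 2D Ising lattice with $\mu = 0$ (lattice size, sweep count, and seed are illustrative choices, not the authors' settings):

```python
import numpy as np

def glauber_sweep(spins, beta, J=1.0, rng=None):
    """One sweep (L*L single-spin updates) of Glauber dynamics on a
    periodic 2D Ising lattice with no external field (mu = 0)."""
    if rng is None:
        rng = np.random.default_rng()
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        # Sum over the four nearest neighbors with periodic boundaries.
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nb          # energy change of flipping (i, j)
        if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):  # Glauber flip rule
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(8, 8))
for _ in range(100):                              # "sufficiently long" burn-in
    glauber_sweep(spins, beta=0.7, rng=rng)
magnetization = spins.mean()
```

Running such dynamics long enough recovers equilibrium statistics of the finite lattice, which is how comparison observables can be obtained when the infinite-grid formulas do not apply directly.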
We will add the above details to the paper. Please let us know if you are missing other details.
**Essential References Not Discussed:**
- Thank you for pointing us to these citations. We will happily add them to the text when we are allowed to edit it. We note that the paper by Tian et al is cited already in multiple locations of the manuscript.
- One distinguishing feature of our approach relative to autoregressive models is that we can draw samples in fewer than $d$ steps, where $d$ is the dimension. For example, we achieved an ESS of 69% in $<100$ steps for the Ising model. An autoregressive model could only achieve the same result in $d=L^2$ steps. Therefore, our effective ESS will be higher.
**Non-locally invariant architecture clarification**
- Thank you for bringing this up. As stated in section 7, a non-locally equivariant architecture would require $\mathcal{O}(N*d)$ evaluations of the network, for $N$ the number of neighbors and $d$ the dimension. This number is prohibitively large for any $d$ that is interesting for applications, e.g. $d=225$ in the Ising experiment. We do not see a need for experiments showing that this is infeasible. Of course, one could *train* a neural network architecture to be locally equivariant via an additional loss, but then we would lose the computation of *exact* importance weights, which is one of the main goals of our paper.
**Log-variance.**
- We thank the reviewer for bringing up the log-variance loss. Evaluating the log-variance divergence is computationally much more expensive here, which is why we do not benchmark against it. Specifically, the log-variance divergence is $K$ times more expensive in memory, where $K$ is the number of simulation steps. The reason for that is as follows: The log-variance loss is not valid when sampled pointwise (it requires the whole trajectory). In contrast, the PINN loss can be evaluated pointwise. However, the PINN objective generally requires a “discrete divergence” computation (the $\mathcal{K}_t$ operator). We get rid of the cost of this discrete divergence with the locally equivariant networks. Note that this is not necessarily true in the continuous case, where the log-var can avoid the computation of a divergence via Ito-integrals.
**Proposal distribution.**
- We use the model itself as a proposal distribution.
Finally, we would like to emphasize that we consider the main contribution of this work to be the introduction of a new paradigm for learning to sample from discrete distributions. We therefore stress our methodological and theoretical contributions:
- A new derivation of Radon-Nikodym derivatives of path measures
- Proactive importance sampling as an IS scheme for CTMCs
- A PINN-objective for discrete samplers by bounding the variance of the IS weights
- Local equivariance as a symmetric constraint to enable scalable proactive IS
- Locally equivariant neural network architectures such as convolutional neural networks
Thank you again for your valuable review. We hope that this addresses your questions. We would accordingly appreciate any increase in rating you see fit. | Summary: The goal of this paper is to draw samples from a distribution $\rho_1$, known up to a normalization constant, over a discrete space.
One way to do so is to simulate a prescribed path of marginal distributions ${(\rho_t)}_{t \in [0, 1]}$ that ends with the desired target.
To do so, the authors introduce a generic process ${(p_t)}_{t \in [0, 1]}$ driven by a Markov operator $Q_t$,
and then consider the reweighted process $(w_t p_t)_{t \in [0, 1]}$,
where the reweighting is done by the probability distribution $w_t(x) \propto \mathbb{E}_{A_t | X_t = x}[\exp(A_t)]$.
This reweighted sampling process simulates exactly the prescribed path of marginals, if either:
1. the Markov operator $Q_t$ verifies (Eq 7): $\partial_t \rho_t(x) = \sum_{y \in S} Q_t(x, y) \rho_t(y)$
2. the log-unnormalized weights $A_t$ verify (Eq 11): $\partial_t A_t = (1 / \rho_t(X_t)) \big( \partial_t \rho_t(X_t) - \sum_{y \in S} Q_t(X_t, y) \rho_t(y) \big)$
The authors propose to enforce the equation on $Q_t$ by minimizing its violation, which can be written as a PINN loss. In that loss, the authors propose a computationally efficient parameterization of $Q_t$ based on equivariant neural networks.
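To see what Eq. 7 demands of $Q_t$ on a toy example: in this convention ($\partial_t \rho_t(x) = \sum_y Q_t(x, y) \rho_t(y)$), a valid rate matrix has columns summing to zero, so integrating the master equation conserves total probability. A minimal sketch with an illustrative time-independent two-state $Q$ (not the paper's construction):

```python
import numpy as np

# Rate matrix in the convention of Eq. 7: d/dt p_t(x) = sum_y Q(x, y) p_t(y).
# Each column of Q sums to zero, so total probability is conserved.
Q = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])

p = np.array([0.5, 0.5])
dt = 1e-3
for _ in range(10_000):          # Euler-integrate the master equation to t = 10
    p = p + dt * (Q @ p)

# The chain relaxes to the stationary distribution of Q, which solves
# Q @ pi = 0: here pi = (2/3, 1/3).
```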
## update after rebuttal
I appreciate that additional experiments were added, where the authors compare against baseline methods that were lacking in the original submission. Sampling in discrete spaces is a rapidly evolving field with a broad literature, ranging from statistical physics to mainstream machine learning, so finding all the relevant literature is not so obvious. It seems like the authors agreed to cite the references that were rightly brought up by many reviewers. I agree with reviewer 8trc that further clarity would be welcome in the experiments (using other metrics than ESS) and the text (claims about auto-regressive models and the log-variance loss). Yet overall, the paper is already an interesting contribution to the field. So I will maintain my positive score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The evaluation is encouraging.
Theoretical Claims: Not in detail but they seem coherent with previous literature.
Experimental Designs Or Analyses: I looked at the reported results which are encouraging. I have a few questions, however:
- **What does "no transport" refer to?**
In Figure 4, what does "no transport" refer to? I read in the caption that it denotes the case "of using annealed dynamics with just the marginal preserving MCMC updates to show that the transport from Q_t is essential". I don't understand what this corresponds to exactly in the text. Could the authors provide more detail?
- **More datasets**
The experiments on the Ising model, while principled (because the ground truth is known) and encouraging, are limited. Did the authors benchmark against other target distributions, such as quantized MNIST as in [1].
- **LEAPS uncorrected vs. LEAPS**
In Figure 1, the authors contrast LEAPS (perfect transport and reweighting) with uncorrected LEAPS (reweighting only). Which $Q_t$ is used for the uncorrected LEAPS? Because the distinction between LEAPS and uncorrected LEAPS is central to the authors' paper, it would be useful to see both these results in Figure 4 as well.
[1] Discrete Langevin Sampler via Wasserstein Gradient Flow. Haoran Sun, Hanjun Dai, Bo Dai, Haomin Zhou, Dale Schuurmans. 2022.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper relates to the sampling literature.
Essential References Not Discussed: Related work is discussed.
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: -
Questions For Authors: 1. **Confused about motivation of the equivariant parameterization of the rate matrix**
In section 7, the authors first write an equation involving $Q_t^{\theta}(y^i, i | x)$ and $Q_t^{\theta}(x^i, i | y)$. If I correctly understand, each of these terms should be evaluated for each neighbor $y$, so roughly 2 |N(x)| times.
After enforcing equivariance, the equation now only involves $F_t^{\theta}(y^i, i | x)$. As I understand it, this term should also be evaluated for each neighbor $y$ so |N(x)| times. What am I missing?
2. **Connection with NETS**
The authors mention that their paper can be understood as an extension of [1]. The comparison is
| NETS | LEAPS |
| -------- | ------- |
| ALD | CTMC |
| ALD + perfect additional transport (Eq 16) | CTMC + perfect rate matrix |
| ALD + reweighting (Eqs 10, 11) | CTMC + reweighting |
One difference is that the vanilla ALD defines a specific, tractable Markov kernel, that does not require learning and simulates *approximately* the prescribed path.
In contrast, CTMC is defined by a general Markov kernel that does not necessarily simulate approximately the prescribed path.
Do the authors have an idea of how to choose a CTMC that would be a closer comparison to ALD? Specifically, do they have an idea of what would be a default, tractable choice of $Q_t$, that does not require learning and simulates *approximately* the prescribed path?
3. **Question about the PINN loss**
The PINN loss in Proposition 6.1. encourages $K_s \rho_s = \partial_s F_s$. Shouldn't it encourage $K_s \rho_s = \partial_s A_s$ from Eq 11?
4. **Interpolation schemes**
The authors define a general interpolation in Eq 2. In Figure 4, the authors use an interpolation that is specific to the Ising model. In Figure 1, which interpolation do the authors use? In the text after equation 3, the authors say that when the initial distribution is uniform, then we get $\rho_t \propto \exp(-t U_1(x))$ but this is the case when the interpolation is a geometric mean. In discrete diffusion models, the interpolation is an arithmetic mean between the start and end distributions. Can the authors clarify which interpolations they use? Also, have the authors tried different interpolations or have any thoughts on how that choice affects the learning of $Q_t$?
[1] NETS: A Non-Equilibrium Transport Sampler. Michael S. Albergo, Eric Vanden-Eijnden. 2024.
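On question 4, the difference between the two interpolations can be seen on a two-state toy example (the values below are illustrative, not from the paper): the geometric mean $\rho_t \propto \rho_0^{1-t}\rho_1^t$ anneals energies, while the arithmetic mean $\rho_t = (1-t)\rho_0 + t\rho_1$ is the probabilistic mixture used in discrete diffusion.

```python
import numpy as np

rho0 = np.array([0.5, 0.5])   # uniform start
rho1 = np.array([0.9, 0.1])   # target

def geometric(t):
    r = rho0 ** (1 - t) * rho1 ** t    # unnormalized geometric mean
    return r / r.sum()

def arithmetic(t):
    return (1 - t) * rho0 + t * rho1   # probabilistic mixture

# Both paths share the endpoints but differ in between:
# geometric(0.5) = [0.75, 0.25], while arithmetic(0.5) = [0.7, 0.3].
```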
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback on our work. Below, we provide answers to your questions and concerns.
**Additional benchmarks and experiments.** We obtained new experimental results. The results can be found under this anonymous link, in which we show that our method performs well at the critical phase of the Ising model and compares favorably against existing samplers (see more below): https://tinyurl.com/leapsicml
We use two benchmarks:
1. **LEAPS vs. MCMC.** We benchmark LEAPS against a diversity of MCMC samplers on the critical temperature of the Ising model. The results can be found in figure 1 in the google drive folder. As one can see, LEAPS achieves an effective sample size (ESS) of $\sim 70$% vs ESS$<0.1$% of MCMC-based samplers.
2. **LEAPS vs. AIS:** We also benchmark LEAPS against annealed importance sampling (AIS), see figure 3. As one can see, even with $100k$ simulation steps, AIS only achieves an ESS of $<30$% vs an ESS of $69$% of LEAPS with 100 simulation steps.
Please see our response to YU3Q for a thorough discussion of these results.
**Questions.** We address your remaining questions and comments in the following:
- ```What does "no transport" refer to?```: This refers to not using the neural network at all. We show via this ablation that it is truly the neural network (and not the AIS) that enables sampling from the distribution with only a few samples. We also ran the additional benchmarks explained above (AIS vs. LEAPS) without using AIS at all for LEAPS.
- ``LEAPS uncorrected vs. LEAPS``: For the simulation of the continuous-time Markov chain, we always use the rate matrix $Q_t$ represented by our network. Due to the local equivariance, we get importance weights for free. "LEAPS uncorrected" means that we do not use the importance weights (at perfect training, this would not be necessary). "LEAPS corrected" means that we use the importance weights to reweight samples towards the correct distribution. Both parts are essential, as our ablations show.
- ```More datasets and benchmarks:``` We provided other benchmarks, in particular against a series of MCMC samplers. We are also actively running experiments on the Potts model, another model from statistical physics. The only other established benchmark for discrete sampling is combinatorial optimization. However, we note that the metrics used for combinatorial optimization do not aim to faithfully sample from a distribution or any observable thereof, where importance weights could be used. Rather, they try to find low-energy states. This puts our method fundamentally at a disadvantage.
- ```Clarifying equivariance```: We note that for an input $x$ the neural network outputs $F_t^\theta(y^i,i|x)$ for every $y^i,i$ for token $y^i$ and index $i$ (this is also true for discrete diffusion models, it returns a rate per neighbor). Therefore, this requires only one forward pass. Thank you for bringing it up that this needs further clarification. We will highlight the distinction between input and output of the neural network more in future versions.
- ```Connection with NETS/ALD```: Here is the analogy. Because ALD is like running Langevin dynamics locally on some $\rho_t \sim e^{-U_t}$ for fixed $t$ and then iterating $t \rightarrow t + dt$, so too can we add any local MCMC kernel to the dynamics of the CTMC that preserves detailed balance. This would "simulate approximately the prescribed path" just like Langevin does in the continuous setting. Does that make sense? We will add a remark about this in the text.
- ```Question about the PINN loss```: The goal of LEAPS (and of all importance sampling schemes) is that the weights have as low variance as possible. In our case, this would mean that $A_t$ is just a constant in time and space. In other words, this means $\partial_sA_s=0$. As we show in the paper, this is equivalent to $\mathcal{K}_s\rho_s=\partial_sF_s$. Thank you for bringing this up. We will highlight this in future versions of our work.
- ```Interpolation schemes```: The interpolation scheme is given by a time-dependent Ising model for coupling constant $J_t=tJ_1$ where here $J_1=1$. As you point out, this is equivalent to temperature annealing, i.e. $\rho_t(x)\propto \exp(-tU_1(x))$. As also pointed out, discrete diffusion models use arithmetic means (or probabilistic mixtures) between a uniform (or mask) and the target distribution. In principle, one could use such an interpolation. We focused on temperature annealing because it is common in the sampling literature and it allows us to sample from the Ising model at higher temperatures on the fly (by simply stopping at earlier time points). Therefore, this is a natural and physical choice. We have explored various annealing speeds and schedules (i.e. time parameterizations) for the annealing that happens during training and found the schedule $t\to \sqrt{t}$ to perform best. | Summary: The paper proposes locally equivariant functions, a compact neural parameterization of rate matrices in continuous-time Markov processes over discrete state spaces. This effectively allows them to use recent "discrete diffusion" models as proposals in annealed importance sampling and sequential Monte Carlo over discrete spaces. They train these via a variational upper bound on the log-variance divergence. The experiment with a 2D Ising model that has a computable ground truth shows an improvement in performance using the resulting proposals over proposals designed without any measure-transport or diffusion machinery.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Proofs are in the appendix and were not checked, but the theorems and propositions themselves do look sensible to me.
Experimental Designs Or Analyses: There was only one experiment, and its design looks fine. In the course of reviewer discussion, the authors have added additional benchmarks and experiments, which appear quite thorough.
Supplementary Material: No
Relation To Broader Scientific Literature: The paper contextualizes itself adequately within the literature.
Essential References Not Discussed: I could not tell the authors if they were missing a discrete diffusion paper.
Other Strengths And Weaknesses: The paper's main weakness is that it gives exactly one experiment in which the only baseline against something other than a locally-equivariant sampler is "no transport". Could methods from other works cited not be compared to the present method?
After review the paper has become significantly stronger through the addition of more experiments.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback on our work. Below we address your questions and provide new information on additional experiments.
**Updates to experiments: harder sampling problem and comparison benchmarks**
We obtained new experimental results and benchmarks, as far as possible within the limited time frame. The results can be found at this anonymous link, in which we show that our method performs well at the critical point of the Ising model and compare it extensively against other methods in this hard regime (see more below): https://tinyurl.com/leapsicml
**Additional benchmarks (1) - LEAPS vs. MCMC.** We use the DISCS benchmark [1] for discrete samplers to compare LEAPS to various MCMC samplers. We also adapt the parameters of the Ising model to the critical temperature for $\beta=0.4407$. Note that this makes the problem harder (both for traditional methods and for LEAPS). The results can be found in figure 1 in the google drive folder. As one can see, LEAPS achieves an effective sample size (ESS) of $\sim 70$% vs an ESS of $<0.1$% of MCMC-based samplers. This shows how LEAPS effectively converts a simple distribution into a highly complex distribution that can only be sampled with many MCMC runs. In figure 3, we also show that the samples recover known physical observables of the Ising model. Finally, we note that there are significant difficulties in creating a fair assessment between non-equilibrium (LEAPS) and equilibrium methods (MCMC):
- The ESS is measured differently in DISCS because it focuses on equilibrium MCMC methods, rather than non-equilibrium sampling methods such as LEAPS.
- The ESS quantities for non-equilibrium methods such as LEAPS do not take into account the number of steps used for annealing (i.e. for simulation of the continuous-time Markov chain). To account for this, we have also plotted the same quantities normalized by the number of function evaluations in Figure 1 (ESS/NFEs). However, LEAPS does *not* require evaluations of the energy at inference time but only of the neural network, while MCMC methods require energy evaluations (see metrics used in DISCS [1]). The best way we can account for that now is to normalize by the NFEs as measured in neural net evaluations. However, note that this does not measure efficiency or wall clock time, which would be highly dependent on implementation and is usually not measured in the literature on neural sampling methods. The main goal of LEAPS is to solve the sampling problem, not efficiency.
We hope that our new experiments showcase the efficacy of the LEAPS algorithm, while also explaining the difficulty in benchmarking LEAPS vs traditional equilibrium methods.
**Additional benchmarks (2) - LEAPS vs. AIS.** We also benchmark LEAPS against annealed importance sampling (AIS) [2]. AIS is a standard non-equilibrium sampling technique. Note that here, ESS is measured in the same way and the two methods are exactly comparable. In Figure 2, we plot the number of AIS steps vs the ESS of AIS. As one can see, even with $100k$ simulation steps, AIS only achieves an ESS of $<30$% vs an ESS of $69$% of LEAPS with 100 simulation steps. We note that LEAPS is a true extension of AIS and we can run AIS with the learned transport without changing the weights. In addition, it can be turned into a sequential Monte Carlo sampler by performing resampling along the trajectory. However, for the purposes of benchmarking, we turn off the AIS (here and above) to showcase that the learnt transport via the neural network enables the efficiency boost.
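For readers comparing the numbers above: the normalized effective sample size of importance weights $w_1,\dots,w_N$ is $\mathrm{ESS} = (\sum_i w_i)^2 / (N \sum_i w_i^2)$, reported as a percentage. A minimal sketch (the weight values below are made up for illustration):

```python
import numpy as np

def ess_fraction(log_w):
    """Normalized effective sample size in [0, 1] from log importance weights."""
    w = np.exp(log_w - np.max(log_w))   # stabilize before exponentiating
    return (w.sum() ** 2) / (len(w) * (w ** 2).sum())

uniform_w = np.zeros(1000)                       # equal weights -> ESS = 100%
skewed_w = np.log(np.linspace(1e-3, 1.0, 1000))  # uneven weights -> ESS < 100%
```

Equal weights give an ESS of 100%; the more uneven the weights, the smaller the fraction of samples that effectively contribute to any estimate.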
**Comparison to other works cited:** Many of the works we cite concern the mathematical realization of discrete diffusion models and sampling algorithms for continuous variables. To the best of our knowledge, we do not know of another paper that learns a neural sampler for discrete distributions via a continuous-time Markov chain. Therefore, we cannot benchmark against them. If you know of any, please let us know.
Finally, we would like to emphasize that we consider the main contribution of this work to be the introduction of a new paradigm for learning to sample from discrete distributions. We therefore emphasize again our methodological and theoretical contributions:
- A new derivation of Radon-Nikodym derivatives of path measures
- Proactive importance sampling as an IS scheme for CTMCs
- A PINN-objective for discrete samplers by bounding the variance of the IS weights
- Local equivariance as a symmetric constraint to enable scalable proactive IS without losing representational power
- Locally equivariant neural network architectures such as LE-convolutional neural networks
Thanks again for your valuable feedback. If these results have fulfilled your questions and concerns, we would greatly appreciate any increase in score.
[1]. Goshvadi K, Sun H, Liu X, et al. DISCS: a benchmark for discrete sampling[J]. Advances in Neural Information Processing Systems, 2023, 36: 79035-79066. | Summary: The authors propose an algorithm for sampling from discrete distributions by combining importance sampling with a learned continuous-time Markov chain (CTMC). They derive the importance weights via a Radon–Nikodym derivative for CTMCs and introduce locally equivariant neural architectures to ensure tractable learning of the rate matrices. Empirical results on a 2D Ising model demonstrate the sampling efficiency of the algorithm.
Claims And Evidence: Most of the theoretical claims in the paper are supported by clear proofs or sound intuitions. However, while the empirical results on the 2D Ising model are promising, further evidence on additional benchmarks or experiments could strengthen the overall support for the method’s general applicability. (Please see "Experimental Designs Or Analyses" part)
Methods And Evaluation Criteria: The proposed methods are well-suited to the problem at hand. The derivation of importance sampling weights via the Radon–Nikodym derivative is both theoretically sound and intuitively reasonable, and the parametrization of the rate matrix aligns with common practices in discrete Markov models. Moreover, the use of effective sample size (ESS) as the evaluation criterion in the 2D Ising model experiment is standard for assessing sampling efficiency, providing a clear metric for evaluating the algorithm’s performance.
Theoretical Claims: I checked the proof of Proposition 5.1 and Theorem 5.2 roughly. Both proofs appear to be logically structured and correct.
Experimental Designs Or Analyses: There are two main concerns regarding the experiment part of this paper:
1. Insufficient benchmarks: Considering the many effective discrete sampling algorithms proposed in recent years, the paper would be more convincing if the authors compared their method against these state-of-the-art techniques. For example, they could refer to [1], which presents a set of advanced discrete samplers and provides useful comparisons between these samplers.
2. Lack of more experiments: The evaluation is limited to the Ising model. To more comprehensively demonstrate the algorithm’s effectiveness, the authors should consider additional experiments on other tasks, such as other classical graphical models, combinatorial optimization problems, or generative tasks. The settings and benchmarks in [1] could serve as a useful guide for expanding the experimental section.
References:
[1]. Goshvadi K, Sun H, Liu X, et al. DISCS: a benchmark for discrete sampling[J]. Advances in Neural Information Processing Systems, 2023, 36: 79035-79066.
Supplementary Material: There's no supplementary material. I would recommend the authors provide codes for the experiments.
Relation To Broader Scientific Literature: The paper provides its contributions to discrete sampling problems. In particular, it builds on prior advances in annealed importance sampling, sequential Monte Carlo methods, and recent discrete diffusion models that employ CTMCs. The introduction of locally equivariant neural architectures for parameterizing the rate matrix extends existing ideas from neural parameterizations in discrete Markov models.
Essential References Not Discussed: It would be a good idea if the authors could discuss the family of newly proposed discrete sampling methods mentioned in [1], like Locally Balanced [2], Gibbs with Gradients [3], Path Auxiliary
Sampler [4], Discrete Metropolis Adjusted Langevin Algorithm [5],
Discrete Langevin Monte Carlo [6].
References:
[1]. Goshvadi K, Sun H, Liu X, et al. DISCS: a benchmark for discrete sampling[J]. Advances in Neural Information Processing Systems, 2023, 36: 79035-79066.
[2]. Zanella G. Informed proposals for local MCMC in discrete spaces[J]. Journal of the American Statistical Association, 2020, 115(530): 852-865.
[3]. Grathwohl W, Swersky K, Hashemi M, et al. Oops i took a gradient: Scalable sampling for discrete distributions[C]//International Conference on Machine Learning. PMLR, 2021: 3831-3841.
[4]. Sun H, Dai H, Xia W, et al. Path auxiliary proposal for MCMC in discrete space[C]//International Conference on Learning Representations. 2021.
[5]. Zhang R, Liu X, Liu Q. A Langevin-like sampler for discrete distributions[C]//International Conference on Machine Learning. PMLR, 2022: 26375-26396.
[6]. Sun H, Dai H, Dai B, et al. Discrete langevin samplers via wasserstein gradient flow[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2023: 6290-6313.
Other Strengths And Weaknesses: Strengths:
1. The paper is highly motivated, contributing to the challenging and important topic of discrete sampling, which has significant implications in statistics and machine learning.
2. The derivation of the proposed algorithm is intuitive and theoretically sound, and the approach is novel to me.
Weaknesses:
My main concern lies with the experimental section. Expanding the benchmarks and providing more extensive empirical comparisons, as discussed in the “Experimental Designs or Analyses” section, would strengthen the paper. With improvements to the experimental evaluation, the paper would make a stronger contribution.
Other Comments Or Suggestions: There are several typos and unclear notations that need to be corrected:
1. **Line 110 (right):**
"As we do not the normalization constant" should be "As we do not know the normalization constant."
2. **Equation 12 (second line):**
It appears that $\mathbf{Y}$ might be a typo.
3. **Proposition 6.1:**
The sampling notation should likely be $s \sim \mathrm{Unif}[0,t]$ instead of $s \sim \mathrm{Unif}[0,1]$.
4. **Equation 16 (second line):**
The notation $y^j$ should be corrected to $y_j$.
Questions For Authors: Please see "Experimental Designs Or Analyses" and "Other Comments Or Suggestions"
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback on our work. We are glad to read that the reviewer considers our work a "sound" and "novel" contribution. Below, we provide answers to your questions and concerns.
We understand that the reviewer sees our experiments as the main area of improvement. Addressing this, we obtained new experimental results, as far as possible within the limited time frame. The results can be found under this anonymous link, in which we show that our method performs well at the critical phase of the Ising model and compares favorably against existing samplers (see more below): https://tinyurl.com/leapsicml
We would also like to emphasize that we consider the main contribution of this work to be the introduction of a new paradigm for learning to sample from discrete distributions. We therefore stress our methodological and theoretical contributions:
- A new derivation of Radon-Nikodym derivatives of path measures
- Proactive importance sampling as an IS scheme for CTMCs
- A PINN-objective for discrete samplers by bounding the variance of the IS weights
- Local equivariance as a symmetric constraint to enable scalable proactive IS
- Locally equivariant neural network architectures such as convolutional neural networks
**Additional benchmarks (1) - LEAPS vs. MCMC.** Thank you for sharing the DISCS benchmark [1] for discrete samplers with us. To use this benchmark, we adapt the parameters of the Ising model to match theirs, i.e. we use the parameters for the critical temperature for $\beta=0.4407$. Note that this makes the problem harder (both for traditional methods and for LEAPS). We then ran benchmarks of LEAPS vs other discrete samplers (including the ones in the DISCS benchmark). The results can be found in figure 1 in the google drive folder. As one can see, LEAPS achieves an effective sample size (ESS) of $\sim 70$% vs ESS$<0.1$% of MCMC-based samplers. This shows how LEAPS effectively converts a simple distribution into a highly complex distribution that can only be sampled with many MCMC runs. In figure 3, we also show that the samples recover known physical observables of the Ising model. Finally, we note that there are significant difficulties in creating a fair assessment between non-equilibrium (LEAPS) and equilibrium methods (MCMC):
- The ESS is measured differently in DISCS because it focuses on equilibrium MCMC methods, rather than non-equilibrium sampling methods such as LEAPS.
- The ESS quantities for non-equilibrium methods such as LEAPS do not take into account the number of steps used for annealing (i.e. for simulation of the continuous-time Markov chain). To account for this, we also plot the same quantities normalized by the number of function evaluations in Figure 1 (ESS/NFEs). However, LEAPS does *not* require evaluations of the energy at inference time but only of the neural network, while MCMC methods require energy evaluations (see metrics used in DISCS [1]). The best way we can account for this at present is to normalize by the NFEs measured in neural-network evaluations. Note, however, that this does not measure efficiency or wall-clock time; the main goal of LEAPS is to solve the sampling problem, not to optimize efficiency.
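For concreteness, the ESS figures quoted above can be computed from unnormalized importance weights via the standard (Kish) formula; the following is an illustrative stdlib-only sketch, not our benchmark code:

```python
import math

def effective_sample_size(log_weights):
    """Kish ESS from unnormalized log importance weights: (sum w)^2 / sum w^2."""
    m = max(log_weights)                        # subtract the max for numerical stability
    w = [math.exp(lw - m) for lw in log_weights]
    return sum(w) ** 2 / sum(x * x for x in w)

# Uniform weights: every sample counts fully, so ESS = N (i.e., 100%).
assert abs(effective_sample_size([0.0] * 1000) - 1000.0) < 1e-6
# One dominant weight: the estimate effectively rests on a single sample.
assert effective_sample_size([0.0] * 999 + [50.0]) < 1.001
```

The ESS/NFE normalization discussed above would then simply divide this quantity by the number of network (or energy) evaluations.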
We hope that our new experiments showcase the efficacy of the LEAPS algorithm, while also explaining the difficulty in benchmarking.
**Additional benchmarks (2) - LEAPS vs. AIS.** We also benchmark LEAPS against annealed importance sampling (AIS), a standard non-equilibrium sampling technique. Note that here, the ESS is measured in the same way for both methods. In Figure 2, we plot the number of AIS steps vs. the ESS of AIS. As one can see, even with $100k$ simulation steps, AIS only achieves an ESS of $<30$%, versus an ESS of $69$% for LEAPS with 100 simulation steps. We note that LEAPS is a true extension of AIS (as well as SMC), and we can run AIS with the learned transport without changing the weights. However, for the purposes of benchmarking we turn off the AIS component (here and above) to show that the transport learned by the neural network is what enables the performance boost.
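As background on why AIS serves as a sound reference, its importance weights are unbiased estimators of the ratio of normalizing constants. The sketch below (a toy two-state example under the assumption of exact sampling from each intermediate distribution, not our benchmark code) verifies this by exhaustive path enumeration:

```python
from itertools import product

f0 = [1.0, 1.0]                  # unnormalized base distribution, Z0 = 2
f1 = [1.0, 3.0]                  # unnormalized target distribution, Z1 = 4
betas = [0.0, 0.25, 0.6, 1.0]    # an arbitrary annealing schedule

def f(beta, x):                  # geometric interpolation f0^(1-beta) * f1^beta
    return f0[x] ** (1 - beta) * f1[x] ** beta

def pi(beta, x):                 # exact intermediate distribution
    return f(beta, x) / (f(beta, 0) + f(beta, 1))

# Enumerate all paths (x_0, ..., x_{K-1}); each x_k is drawn exactly from pi_{beta_k}.
K = len(betas) - 1
expected_w = 0.0
for path in product([0, 1], repeat=K):
    prob, w = 1.0, 1.0
    for k, x in enumerate(path):
        prob *= pi(betas[k], x)
        w *= f(betas[k + 1], x) / f(betas[k], x)
    expected_w += prob * w

assert abs(expected_w - 4.0 / 2.0) < 1e-9   # E[w] = Z1 / Z0, for any schedule
```

The telescoping identity $\mathbb{E}[w] = \prod_k Z_{\beta_{k+1}}/Z_{\beta_k} = Z_1/Z_0$ holds regardless of the schedule, which is why the ESS comparison above is meaningful.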
**Other benchmarks.** Thank you for bringing up other possible benchmarks. We note that combinatorial optimization does not aim to faithfully recover a distribution or any observable thereof, where importance weights could be used. This puts our method fundamentally at a disadvantage. Please let us know if there are other benchmarks that come to mind that we can run to convince you of the potential of the method.
**References.** Thank you for highlighting the references that we included in the updated draft of our work. We have benchmarked against the suggested methods (see above).
We thank the reviewer again for the insightful response. If these results have addressed your concerns, we would greatly appreciate any increase in score.
[1] Goshvadi K, Sun H, Liu X, et al. DISCS: a benchmark for discrete sampling. Advances in Neural Information Processing Systems, 2023, 36: 79035-79066.
Reducing Variance of Stochastic Optimization for Approximating Nash Equilibria in Normal-Form Games | Accept (spotlight poster) | Summary: This paper proposes NAL, a loss function that is unbiased and has lower variance compared with the only unbiased loss function proposed in Gemp et al. (2024). The paper conducts theoretical and empirical justifications to show that NAL theoretically and empirically exhibits lower variance and thus accelerates convergence.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: All proofs seem to be correct.
Experimental Designs Or Analyses: Yes. The experiments in Section 5 clearly demonstrate the advantages of NAL compared with existing loss functions.
Supplementary Material: I skimmed through the appendix.
Relation To Broader Scientific Literature: The paper builds on the literature of equilibrium computation in normal-form games. Gemp et al. (2022) and Duan et al. (2023) propose biased loss functions, and Gemp et al. (2024) proposes an unbiased loss function with large variance. This paper contributes to the literature by proposing an unbiased loss function with lower variance. Besides, this paper also contributes to the general machine learning literature, where unbiased estimators are key components for the convergence of many first-order algorithms.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
S1: The presentation is easy-to-follow and well-written.
S2: The observation of the unnecessary large variance in the loss function proposed by Gemp et al. (2024) is insightful.
S3: Both theoretical and empirical results about NAL are strong.
Weaknesses:
W1: The contribution of lower variance builds upon the idea of Gemp et al. (2024), which may slightly decrease the paper's originality.
W2: The theoretical justification of variance difference between NAL and loss in Gemp et al. (2024) is not rigorous, mainly for following reasons: 1) The expression derived between LHS of line 243-248 is an upper bound of the variance. Actually, to show the variance of Gemp et al. (2024) suffers from a $\sigma \max |\mathcal{A}_i|$ scaling, there should be an example demonstrating the $\Omega(|\mathcal{N}|\sigma^2 \max |\mathcal{A}_i|)$ lower bound, or show that the inequality between LHS of line 255-line 256 is tight in some sense. 2) The lower variance of the gradient is not the only evidence for faster convergence. The norm of the gradient should also be taken into consideration. For example, if you scale the NAL loss with a factor $\sigma \max |\mathcal{A}_i|$, then there seems to be no variance advantages for NAL.
Other Comments Or Suggestions: Comments:
C1: It seems that between RHS of line 380-383, the statement should be reversed (alternate Blotto and Liars Dice).
C2: I found the game size configurations in the caption of Figure 1. I think it should better appear in the main text, since the large game description is one of the motivation to estimate the loss.
Questions For Authors: Questions:
Q1: In RHS of line 327, the authors mentioned that $\epsilon = 1$ in experiments. It means that $\hat{x}_i$ in NAL is always chosen to be the uniform strategy. Can authors provide more intuitions behind this practice?
Q2: In RHS of line 308-316, the authors use a DNN with constant input to represent a strategy. Why do not the authors choose to represent the strategy with real vectors directly? (with post-processed softmax activation)
Q3: How were the duality-gap and exact loss evaluated in experiments? Through brute-force computations or other cleverer approaches? It seems that the game size is not small and directly computing the duality gap and loss functions require costly computations.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive assessment and helpful suggestions.
**W1: The contribution of lower variance builds upon the idea of Gemp et al. (2024), which may slightly decrease the paper's originality.**
**A:** Both our work and Gemp et al. (2024) explore leveraging the stochastic optimization techniques in ML for NE computation. However, our work is motivated by a distinct problem compared to Gemp et al. (2024). Gemp et al. (2024) resolve whether NE can be learned via these techniques in ML, while we address how efficiently NE can be learned through these techniques.
Specifically, Gemp et al. (2024) mainly focus on removing the bias introduced by previous loss functions, which could hinder convergence to NE when employing the stochastic optimization techniques in ML. In contrast, our work tackles the high-variance issue within the loss function proposed by Gemp et al. (2024), as excessive variance can significantly slow down convergence when learning NE via these techniques. As you kindly pointed out, "the observation of the unnecessary large variance in the loss function proposed by Gemp et al. (2024) is insightful": we are the first to identify this high-variance issue and to propose a solution to it.
---
**W2.1: The absence of a lower bound of the variance.**
**A**: We now show that the variance in estimating ${L}^{\tau}\_{G}(x)$ is at least $\sigma \min\_{i \in N}|A\_i|$ times greater than that of NAL.
Assume $Var[\bar{g}^{\tau,x,j}\_i(a\_i)] = \sigma$ (defined in Section 4.2, and $j \in \{ 1, 2\}$). Applying the derivations in Appendix D (where every "$\leq$" can be replaced with "$=$") to Section 4.2, for the loss in Gemp et al. (2024), we have
$$
Var[L^{\tau}\_{G}(x)] = \sum\_{i \in N} \sum\_{a\_i \in A\_i} Var[\bar{g}^{\tau,x,1}\_i(a\_i)\bar{g}^{\tau,x,2}\_i(a\_i)] \geq \sum\_{i \in N} \sum\_{a\_i \in A\_i} \sigma^2 \geq \sigma^2 |N|\min\_{i \in N}|A\_i|.
$$
Similarly, assuming $Var[\hat{g}^{\tau,x}\_i(a\_i)] = \sigma$ (defined in Section 4.2), we have
$$
Var[L^{\tau}\_{NAL}(x)] =
\sum\_{i \in N} \sum\_{a\_i \in A\_i} (x\_i(a\_i))^2 Var[\hat{g}^{\tau,x}\_i(a\_i)] = \sum\_{i \in N} \sum\_{a\_i \in A\_i} (x\_i(a\_i))^2 \sigma \leq \sigma|N|,
$$
where the last inequality follows from $\sum\_{a\_i \in A\_i} (x\_i(a\_i))^2 \leq 1$. Clearly, the variance in estimating ${L}^{\tau}\_{G}(x)$ is at least $\sigma \min\_{i \in N}|A\_i|$ times greater than that of NAL; intuitively, $\sigma$ increases with the size of the game and can grow significantly larger than 1.
We will include this result in the revised version.
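The two scalings can also be checked numerically. The sketch below uses synthetic zero-mean Gaussian noise (not our game payoffs) to contrast the product-of-two-independent-estimates form with the NAL-style single-estimate form under a uniform strategy:

```python
import random
import statistics

rng = random.Random(0)
n, s, trials = 50, 3.0, 4000      # actions per player, per-coordinate noise std, MC samples
x = [1.0 / n] * n                 # uniform strategy

def noisy():
    """One zero-mean advantage estimate with per-coordinate variance s^2."""
    return [rng.gauss(0.0, s) for _ in range(n)]

# Gemp et al. (2024)-style loss: inner product of two independent estimates.
gemp = [sum(a * b for a, b in zip(noisy(), noisy())) for _ in range(trials)]
# NAL-style loss: a single estimate weighted by the strategy.
nal = [sum(xi * bi for xi, bi in zip(x, noisy())) for _ in range(trials)]

var_gemp = statistics.pvariance(gemp)   # ~ n * s^4 (quadratic in the noise variance)
var_nal = statistics.pvariance(nal)     # ~ s^2 / n (linear, and shrunk by the strategy)

assert var_gemp > 100 * var_nal
```

With these toy parameters the theoretical variances are $n s^4 = 4050$ and $s^2/n = 0.18$, matching the linear-vs-quadratic scaling in the derivation above.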
---
**W2.2: The norm of the gradient should also be considered.**
**A**: We sincerely appreciate your insightful suggestion. It is an important aspect that was not considered in our paper. However, since studying the norm of the gradient involves substantial theoretical and empirical investigation, and this paper primarily focuses on variance reduction, we leave a thorough exploration of the gradient norm to future work. Thank you for your thoughtful feedback and for helping to inform the direction of our ongoing research.
---
**Q1: Why use $\epsilon=1$.**
**A:** The reason is that $F^{\tau, x}_i - \overline{F^{\tau, x}_i}$ (used in Gemp et al. (2024)) and $F^{\tau, x}_i - \langle F^{\tau, x}_i , \hat{x}_i \rangle \mathbf{1}$ (used in NAL) are only equivalent when $\epsilon=1$ in Algorithm 1. This choice ensures a fair comparison between NAL and the loss in Gemp et al. (2024), as it mitigates the influence of the selection of $\hat{x}\_i$. To strengthen the robustness of our results, we include experiments with various $\epsilon$ values ($0$, $0.1$, $0.5$, and $0.9$), as shown in Figures 1–4 of https://anonymous.4open.science/api/repo/ICML-2025-ID-10862-Rebuttal/file/additional-experimental-results.pdf. Across all tested $\epsilon$ values, our algorithm consistently outperforms the others.
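The equivalence invoked here, namely that mean-centering $F^{\tau,x}\_i - \overline{F^{\tau,x}\_i}$ coincides with $F^{\tau,x}\_i - \langle F^{\tau,x}\_i, \hat{x}\_i \rangle \mathbf{1}$ for a uniform $\hat{x}\_i$, can be verified directly (a sketch with arbitrary illustrative numbers):

```python
F = [0.7, -1.3, 2.4, 0.2]                 # an arbitrary payoff-gradient vector
n = len(F)
uniform = [1.0 / n] * n                   # the uniform strategy x_hat

centered = [f - sum(F) / n for f in F]    # F - mean(F)
inner = sum(fi * u for fi, u in zip(F, uniform))
advantage = [f - inner for f in F]        # F - <F, uniform> * 1

assert all(abs(a - c) < 1e-12 for a, c in zip(advantage, centered))
```

Since $\langle F, \hat{x} \rangle = \frac{1}{n}\sum_a F(a)$ for uniform $\hat{x}$, the two centerings agree coordinate-wise.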
---
**Q2: Why use a DNN rather than a real vector for strategy representation?**
**A:** We employ a DNN due to its capability to approximate arbitrary non-linear functions, enabling the discovery of complex equilibrium strategies that simpler representations may overlook (see Appendix A for a detailed discussion on the advantages of DNNs). In contrast, a real vector lacks this expressive power. The results, where the strategy is represented using a real vector, are shown in Figure 5 of https://anonymous.4open.science/api/repo/ICML-2025-ID-10862-Rebuttal/file/additional-experimental-results.pdf. All algorithms exhibit varying degrees of performance degradation, yet our algorithm still outperforms the others.
---
**Q3: How were the duality-gap and exact loss evaluated in experiments? Through brute-force computations or other cleverer approaches?**
**A:** Unfortunately, we rely on brute-force computation. This computation is used solely to assess the performance of the tested algorithms and is not involved in the algorithms' convergence process.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. All my concerns are resolved, and I will maintain my positive evaluation of this submission.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem.
Theoretical Claims: As far as I checked, the proofs are correct.
Experimental Designs Or Analyses: The experimental design and analysis sound.
Supplementary Material: Not fully reviewed.
Relation To Broader Scientific Literature: Approximating the NE of normal-form games is a well-studied area. The main contribution of this paper is a novel surrogate loss function, which is theoretically proven to be unbiased and has a significantly lower variance than the existing unbiased loss function. As the authors point out, this result shows that the variance of the loss function may be one of the key issues influencing the convergence rate for approximating NE.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and thoughtful review of our manuscript. Your positive feedback is truly encouraging. This paper focuses on leveraging stochastic optimization techniques from ML to approximate the NE of normal-form games. We propose a novel surrogate loss function with significantly lower variance than the existing unbiased loss function; this reduction in variance accelerates the convergence rate for approximating NE. We hope our work inspires more researchers in the community to engage with this emerging line of work on leveraging stochastic optimization techniques from ML for NE computation.
Post-rebuttal
While the authors' response was helpful, I do think that it would be valuable to see the exposition regarding the stop-gradient incorporated into the paper, in particular, a precise mathematical definition. While the references are helpful, given that it is a key notion in the paper, it should be defined at the beginning of the paper before the approach is explained in more detail. I will maintain my score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked the proofs, although, it was unclear to me what the formal definition of the stop-gradient operator is; since the paper is about the novel loss function, and the analysis of the loss function hinges on the stop-gradient operator, it seems as though more exposition should be allocated to the stop-gradient operator.
Experimental Designs Or Analyses: Yes, the experimental design is sound.
Supplementary Material: Yes, I reviewed all of the appendices.
Relation To Broader Scientific Literature: The authors have done a good job of contextualizing their work with respect to the broader literature, particular in appendices A and B.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: I have already commented on my confusion regarding the stop-gradient operator. The empirical results seem to suggest that the method is successful at reducing variance.
Other Comments Or Suggestions: 1. In the Related Work section, it is a bit odd to refer to $\hat{\mathbf{x}}_i$ when you have not introduced any notation yet; it just leads to more confusion for the reader (especially, when you say your "algorithm does not support $\hat{\mathbf{x}}_i = 0$"; these statements would be more appropriately placed after notation has been introduced by your paper, and in the Related Work section, it would make sense to keep things more high-level without introducing notation.
Questions For Authors: 1. Can the authors explain how the stop-gradient enables variance reduction in their stochastic optimization framework?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your valuable and insightful comments.
**Q1: The use of undefined symbols in the Related Work section.**
**A:** We fully agree with your observation. The use of undefined symbols compromises the internal consistency of the paper. We will revise our paper to ensure that all symbols are properly introduced and clearly defined before they are used.
---
**Q2: The definition of the stop-gradient operator.**
**A:** We apologize for the confusion; the description of the stop-gradient operator in our paper (lines 175–178 and 767–770) is too brief. Below, we provide a more detailed introduction to the stop-gradient operator.
Let $b \in \mathbb{R}^n$ be a variable. The stop-gradient operator is defined as $sg\[b\] = b \in \mathbb{R}^n$ with $\nabla\_b sg\[b\] = 0 \in \mathbb{R}^{n \times n}$. This implies that $sg\[b\]$ passes the value of $b$ unchanged in the forward pass, but blocks its gradient during backpropagation. Intuitively, $sg[\cdot]$ can be regarded as a constant during differentiation. In summary,
- **Forward pass:** $sg\[b\]$ returns the value of $b$.
- **Backward pass:** The gradient is blocked—no gradients are propagated through $b$.
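These two rules can be made concrete with a few lines of forward-mode autodiff; the `sg` function below zeroes the derivative component while passing the value through (an illustrative sketch, not the implementation used in our experiments):

```python
class Dual:
    """Forward-mode AD number: carries a value and its derivative together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __mul__(self, other):
        # Product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def sg(d):
    """Stop-gradient: the forward value passes through, the derivative is blocked."""
    return Dual(d.val, 0.0)

x = Dual(3.0, 1.0)       # seed dx/dx = 1
b = x * x                # b(x) = x^2: value 9, derivative 6

with_sg = sg(b) * x      # <sg[b], x>: its gradient should be b itself
without_sg = b * x       # plain x^3: its gradient is 3x^2

assert with_sg.val == 27.0 and without_sg.val == 27.0   # identical forward values
assert with_sg.dot == 9.0                               # gradient = b(3) = 9
assert without_sg.dot == 27.0                           # full gradient = 3 * 3^2 = 27
```

The forward values agree, but the gradient through the `sg` branch is blocked, exactly as in the two rules above.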
We will provide a detailed definition of the stop-gradient operator in a future revision. This operator has already been widely adopted in prior works [1,2,3], and we were inspired by its use there to adopt it in our work.
References:
1. Grill, Jean-Bastien, et al. "Bootstrap your own latent: a new approach to self-supervised learning." *Advances in neural information processing systems* (NeurIPS). 2020.
2. Flennerhag, Sebastian, et al. "Meta-Learning with Warped Gradient Descent." *International Conference on Learning Representations* (ICLR). 2020.
3. Chen, Xinlei, and Kaiming He. "Exploring simple siamese representation learning." *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition* (CVPR). 2021.
---
**Q3: How the stop-gradient enables variance reduction?**
**A:** Your observation is highly perceptive. The stop-gradient operator is crucial for variance reduction in our stochastic optimization framework. Specifically, it ensures that the variance of our NAL is determined solely by a single estimated variable. In contrast, the loss proposed by Gemp et al. (2024) involves the inner product of two independently estimated variables, leading to significantly higher variance. A more detailed explanation is provided below.
The core idea behind our NAL is that the stochastic optimization techniques in ML rely solely on unbiased estimates of the first-order gradient, rather than the loss function itself. Leveraging this, we define NAL using the stop-gradient operator as $\langle sg\[b\], x\_i \rangle$, where the backpropagated gradient is simply $sg\[b\]$. Here, $b$ is an estimate of the first-order gradient, defined as ${F}^{\tau, x}\_i - \langle {F}^{\tau, x}\_i , \hat{x}\_i \rangle \mathbf{1}$, with $\hat{x}\_i$ being any strategy. Formally, for each $a\_i \in A\_i$,
$ \quad \nabla\_{x\_i(a\_i)} \langle sg\[b\], x\_i \rangle $
$= \nabla\_{x\_i(a\_i)} \sum\_{a'\_i \in A\_i} sg\[b\](a'\_i) x\_i(a'\_i) $
$= \sum\_{a'\_i \in A\_i} \nabla\_{x\_i(a\_i)} (sg\[b\](a'\_i) x\_i(a'\_i)) $
$= \sum\_{a'\_i \in A\_i} \left( \nabla\_{x\_i(a\_i)} sg\[b\](a'\_i) \right) x\_i(a'\_i) + sg\[b\](a'\_i) \nabla\_{x\_i(a\_i)} x\_i(a'\_i) $
$= \sum\_{a'\_i \in A\_i} ( \nabla\_b sg\[b\](a'\_i) \nabla\_{x\_i(a\_i)} b ) x\_i(a'\_i) + sg\[b\](a\_i) $
$= sg\[b\](a\_i),$
where the last equality follows from $\nabla\_b sg\[b\] = 0 \in \mathbb{R}^{n \times n}$ (considering $sg\[b\]$ as a constant during the differentiation process makes this process clearer and more understandable).
In contrast, Gemp et al. (2024) define a loss based on the inner product $\langle b', b'' \rangle$, where $b'$ and $b''$ are independent estimates of ${F}^{\tau, x}\_i - \overline{{F}^{\tau, x}\_i}$. Notably, ${F}^{\tau, x}\_i - \overline{{F}^{\tau, x}\_i}$ and ${F}^{\tau, x}\_i - \langle {F}^{\tau, x}\_i , \hat{x}\_i \rangle \mathbf{1}$ are equivalent when $\hat{x}\_i$ is set to the uniform strategy, which is the setup used in our experiments. As analyzed in Section 4.2, the variance of our NAL estimate scales **linearly** with the variance of $b$, while the variance of Gemp et al.'s loss scales **quadratically** with the variance of $b'$ (and $b''$).
Claims And Evidence: Yes (see the strengths discussion below).
Methods And Evaluation Criteria: Yes (see the strengths discussion below).
Theoretical Claims: Although I did not check the details of the proofs (given my limited familiarity with this literature), the technical statements appear well-founded.
Experimental Designs Or Analyses: Yes (see the strengths discussion below).
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper proposes a novel loss function to efficiently compute NE in NFGs.
Essential References Not Discussed: --
Other Strengths And Weaknesses: Strengths:
- The introduction of the Nash Advantage Loss (NAL) is a notable contribution. Its key properties - being unbiased and having reduced variance - directly address the limitations of previous approaches, thereby enhancing the efficiency of NE computation.
- The theoretical sections are presented rigorously. Although I did not check the details of the proofs (given my limited familiarity with this literature), the technical statements appear well-founded.
- The paper provides a systematic empirical evaluation of the proposed method against multiple baselines. The experiments not only compare convergence rates and variance of the loss estimates but also explore different optimizers and network structures, consistently demonstrating better performance of the proposed approach.
Weaknesses:
- The technical sections are dense, and navigating through the various notations and definitions can be challenging. Including a comprehensive table of notations and definitions in the appendix would greatly aid readers.
Other Comments Or Suggestions: --
Questions For Authors: --
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your recognition of our work. In response to your suggestion regarding readability, we have added a table of notations and definitions, as presented in Table 1 of https://anonymous.4open.science/api/repo/ICML-2025-ID-10862-Rebuttal/file/notation-table.pdf. | null | null | null | null | null | null |
FSL-SAGE: Accelerating Federated Split Learning via Smashed Activation Gradient Estimation | Accept (poster) | Summary: The paper introduces FSL-SAGE, a framework designed to address the limitations of Federated Learning (FL) and Split Learning (SL). FL struggles with training large models due to client-side memory constraints, while SL incurs high communication latency due to sequential processing. FSL-SAGE combines the data parallelism of FL with the model-splitting efficiency of SL by employing client-side auxiliary models to estimate server-side gradients. These auxiliary models are periodically aligned with the server to minimize communication overhead while preserving accuracy.
Algorithm: Clients use auxiliary models to estimate server gradients locally, enabling parallel training and reducing reliance on frequent server communication. A "lazy" variant further reduces communication by freezing auxiliary models after alignment.
Convergence Guarantees: Theoretically, FSL-SAGE achieves an O(1/√T) convergence rate, matching FedAvg, despite reduced communication costs.
Empirical Results: Experiments on ResNet-18 (CIFAR-10/100) and GPT2-medium (E2E dataset) demonstrate superior accuracy and communication efficiency compared to FedAvg, SplitFed, and CSE-FSL. FSL-SAGE reduces communication costs by up to 10× while maintaining robustness to data heterogeneity.
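The auxiliary-model idea summarized above can be illustrated with a purely hypothetical toy (not FSL-SAGE's actual architecture): during an alignment round the client fits a small model to the server's cut-layer gradients, then reuses that fit locally between alignments. Here the true server gradient is linear in the activation, so a least-squares linear fit recovers it exactly:

```python
def server_grad(z):
    """Toy server loss s(z) = (z - 2)^2, so ds/dz = 2z - 4."""
    return 2.0 * z - 4.0

# Alignment round: the client collects (activation, server-gradient) pairs once...
zs = [0.0, 1.0, 3.0, 5.0]
gs = [server_grad(z) for z in zs]

# ...and fits an auxiliary linear model g_hat(z) = w*z + c by least squares.
n = len(zs)
zbar, gbar = sum(zs) / n, sum(gs) / n
w = (sum((z - zbar) * (g - gbar) for z, g in zip(zs, gs))
     / sum((z - zbar) ** 2 for z in zs))
c = gbar - w * zbar

# Between alignments, the client backpropagates g_hat instead of querying the server.
assert abs(w - 2.0) < 1e-12 and abs(c + 4.0) < 1e-12
assert abs((w * 7.0 + c) - server_grad(7.0)) < 1e-12   # accurate off the fit points
```

In FSL-SAGE the auxiliary model is a neural network aligned only every $l$-th round, which is what cuts the per-iteration client-server communication.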
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The paper claims that SFL is increasingly relevant for LLM pretraining and fine-tuning in the introduction, yet the experimental validation primarily relies on small-scale models like ResNet-18. This creates a mismatch between the stated motivation and empirical evidence. For instance, ResNet-18 can be easily deployed on edge devices in its entirety, rendering model splitting unnecessary and undermining the practicality of the proposed method for such scenarios. The lack of experiments with prevalent open-source LLMs (e.g., LLaMA 3 or Qwen) weakens the claim about FSL-SAGE's applicability to modern foundation models. The authors should either substantiate their claims with experiments on larger, cutting-edge LLMs or refine their motivation to align with the evaluated scenarios.
Theoretical Claims: I have carefully examined the correctness of the theoretical proofs and found no apparent errors. The authors have provided sufficient justifications and detailed explanations, ensuring the validity of their claims.
Experimental Designs Or Analyses: The experimental designs and analyses exhibit both strengths and limitations in validity:
Strengths: The use of Dirichlet-sampled non-IID data rigorously tests robustness to heterogeneity, and the communication-cost metrics align with real-world federated constraints.
Limitations:
1. Experiments on ResNet-18 (edge-deployable without splitting) and GPT-2-medium (outdated for modern LLM scales) fail to validate claims about "large-model" efficacy.
2. The auxiliary model architecture (a subset of the server model) is fixed without ablation studies. This leaves uncertainty about how auxiliary design (e.g., depth, width) affects performance, especially for larger models.
3. The alignment interval l = 10 is arbitrary; no sensitivity analysis explores how varying l impacts accuracy or communication trade-offs.
4. While FedAvg and SplitFed are included, newer hybrid FL-SL methods (e.g., AdaptSFL[1], CPSL[2]) are absent, raising questions about comparative novelty.
[1] Lin, Zheng, et al. "Adaptsfl: Adaptive split federated learning in resource-constrained edge networks." arXiv preprint arXiv:2403.13101 (2024).
[2] Wu, Wen, et al. "Split learning over wireless networks: Parallel design and resource management." IEEE Journal on Selected Areas in Communications 41.4 (2023): 1051-1066.
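For reference, the Dirichlet label-skew protocol mentioned above is typically implemented along the following lines (an illustrative stdlib-only sketch; the function name and parameters are hypothetical):

```python
import random

def dirichlet_partition(num_classes, num_clients, alpha, seed=0):
    """Per-client class proportions drawn from Dirichlet(alpha); smaller alpha = more skew."""
    rng = random.Random(seed)
    partitions = []
    for _ in range(num_clients):
        # A Dirichlet sample is a vector of Gamma(alpha, 1) draws, normalized to sum to 1.
        g = [rng.gammavariate(alpha, 1.0) for _ in range(num_classes)]
        total = sum(g)
        partitions.append([x / total for x in g])
    return partitions

shares = dirichlet_partition(num_classes=10, num_clients=4, alpha=0.5)
assert all(abs(sum(p) - 1.0) < 1e-9 for p in shares)   # each client gets a valid distribution
```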
Supplementary Material: I have carefully reviewed the supplementary material, including the proofs and additional experimental results.
Relation To Broader Scientific Literature: The key contributions of FSL-SAGE are situated within the evolving landscape of federated and split learning, addressing critical gaps identified in prior work. FL, epitomized by FedAvg (McMahan et al., 2016), prioritized data parallelism and privacy but assumed clients could train full models—an impracticality for modern large-scale architectures. Split learning (SL) (Vepakomma et al., 2018) relaxed client memory constraints via model splitting but introduced sequential bottlenecks, as seen in SplitFed (Thapa et al., 2022). Recent efforts like CSE-FSL (Mu & Shen, 2023) reduced communication via local losses but lacked server feedback, risking accuracy degradation. FSL-SAGE bridges these paradigms by introducing auxiliary models to estimate server gradients locally, enabling parallelism while preserving server guidance. The theoretical O(1/√T) convergence rate aligns with FedAvg’s guarantees, demonstrating that split training need not sacrifice convergence speed despite added complexity.
Essential References Not Discussed: Many recent works have explored issues related to client-server communication and convergence in SFL; however, these aspects are not addressed in the manuscript.
[1] Lin, Zheng, et al. "Adaptsfl: Adaptive split federated learning in resource-constrained edge networks." arXiv preprint arXiv:2403.13101 (2024).
[2] Wu, Wen, et al. "Split learning over wireless networks: Parallel design and resource management." IEEE Journal on Selected Areas in Communications 41.4 (2023): 1051-1066.
[3] Oh, Seungeun, et al. "Locfedmix-sl: Localize, federate, and mix for improved scalability, convergence, and latency in split learning." Proceedings of the ACM Web Conference 2022. 2022.
[4] Lin, Zheng, et al. "Hierarchical split federated learning: Convergence analysis and system optimization." arXiv preprint arXiv:2412.07197 (2024).
Other Strengths And Weaknesses: Strengths:
1. The use of an auxiliary model to approximate activation gradients instead of directly computing the loss locally is an innovative approach.
2. The experiments on communication overhead are comprehensive and provide valuable insights.
Weaknesses:
1. The manuscript lacks sufficient experiments on truly large models that necessitate split training. Additionally, the chosen models and datasets are somewhat outdated.
2. The baseline selection is relatively limited, lacking comparisons with closely related studies on SFL.
3. The experiments are insufficient, particularly in terms of exploring the impact of hyperparameters and conducting ablation studies. These aspects should be addressed to strengthen the overall analysis.
Other Comments Or Suggestions: No.
Questions For Authors: 1. It would be beneficial to indicate the second-best results in the experimental figures, as this could provide additional insights for the result analysis. However, this aspect is not discussed in the manuscript.
2. The manuscript would benefit from a discussion on how system data heterogeneity impacts the final convergence speed and convergence stability. This aspect is crucial for understanding the robustness of the proposed method under varying conditions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. Please refer to the abbreviations in reviewer **HNw2**'s rebuttal.
> **Comment 1:** Experimental validation relies on small-scale models. Weakens claim that FSL-SAGE applies to large models.
**Response:** We appreciate the reviewer's point about using large-scale LLMs for our experimental evaluation. It is true that smaller models like ResNet-18 or GPT-2m are not large enough to necessitate splitting. However, our goal is to demonstrate the efficacy of our method relative to other state-of-the-art methods for a given ML model. We make no assumptions on the model architecture or size, so we expect our theoretical results to hold even for large models. While we are very interested in testing larger LLMs in our framework, the manual process required to implement a split architecture and the AM is a challenge, and we did not have sufficient resources to do so for larger models within the rebuttal period. If this paper is accepted, we will share our source code and welcome collaborations to train larger LLMs.
> **Comment 2:** AM architecture is fixed without ablation study. How does AM size affect performance?
**Response:** Please see our responses to **Comments 4** and **6** by **iZqi**. We will add an ablation study of the AM size to the revision.
> **Comment 3:** The alignment interval $l = 10$ chosen without ablation study. Sensitivity analysis missing.
**Response:** We thank both reviewers **bte9** and **vMHm** for raising this point. We will include an ablation study analyzing the impact of $l$ in the next revision. However, we would like to note that our **Theorem 4.8** has shown the trade-off effect of the alignment interval $l$. In fact, the bound in Eq. (14) applies to every $l^{th}$ communication round. Thus, if $l$ is very large, the algorithm would take longer to converge, which makes sense intuitively, since the AMs would be aligned infrequently and thus mostly misguide the client-model. On the other hand, if $l$ is too small, the first few iterations are spent overtraining the AMs to mimic a near randomly initialized SSM. For our experiments, we manually tuned $l$.
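To make this trade-off concrete, here is a toy simulation (our own construction, not the paper's algorithm or setting): a client descends $f(w) = \tfrac{1}{2}w^2$ using gradient feedback that is refreshed only every $l$-th round, mimicking an auxiliary model that drifts from the server-side model between alignments.

```python
# Toy illustration of the alignment-interval trade-off: the gradient
# feedback is stale between alignment rounds, so a large l can prevent
# convergence entirely, while l=1 recovers plain gradient descent.
def run(l, rounds=40, eta=0.1):
    w, stale_grad = 1.0, 1.0
    for t in range(rounds):
        if t % l == 0:
            stale_grad = w      # alignment round: refresh with the true gradient
        w -= eta * stale_grad   # local update with possibly stale feedback
    return abs(w)

print(run(l=1))   # frequent alignment: converges toward 0
print(run(l=20))  # infrequent alignment: oscillates, fails to converge
```

This matches the intuition in the response: infrequent alignment lets the (stale) feedback misguide the client model.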
> **Comment 4:** Newer hybrid methods like AdaptSFL, CPSL, not included in baselines. Important references missing.
**Response:** From an optimization perspective, AdaptSFL and CPSL are the same as SplitFed and vanilla SL, respectively. These hybrid methods either change the structure of model splitting or the sequence of parallel or sequential operations in SL, but these do not impact the algorithmic convergence of the optimization of the model. Also, the communication efficiency of AdaptSFL and CPSL would be a scaled version of vanilla SL or SplitFed, since these methods do not cut down on the communication cost due to client-server messages at every local iteration, which are both included in our baselines. For these reasons, these two works do not represent new baselines. However, we thank the reviewer for pointing out these and additional references and, we will cite them in the revision.
> **Comment 5:** 1. Experiments use outdated models/datasets. 2. Baselines are limited. 3. Insufficient ablation studies.
**Response:** Please see our responses to **Comment 1, 2** and **3**. We plan to strengthen our experiments by adding ablation studies for $l$ and AM size.
> **Comment 6:** Second-best results in experimental figures.
**Response:** Thank you, we will indicate the second-best results in our figures.
> **Comment 7:** How does system/data heterogeneity affect convergence speed?
**Response:** In our theoretical analysis, we allow for system and data heterogeneity via the quantities $\sigma_l$, i.e., the s.t.d. of the mini-batch of data from its true value, and $\sigma_g$, the s.t.d. of the client's loss function from the overall loss function. The expressions in **Theorem 4.3 (Eq. (3))** and **Theorem 4.8 (Eq. (14))** indicate the effect of these quantities on the convergence rate. Larger values of these variance terms decrease the rate of convergence, as is the case with other FL methods. We appreciate the reviewer for raising this point, and we will add these discussions in our revision.
Claims And Evidence: The claims are provided clearly and convincingly with theoretical and/or numerical evidence.
Methods And Evaluation Criteria: The choice of models and datasets is suitable and is widely adopted in other related work on (split) federated learning.
Theoretical Claims: I have quickly checked the proofs of the theorems, and no obvious errors have been spotted.
However, there are some issues.
- In Theorem 4.3 Eq. (3), it seems that $c$ is a typo (originally defined for clients). Can the authors specify its physical meaning?
- Theorem 4.3 is agnostic to $l$, i.e., how frequently the auxiliary models are aligned with the server-side model. Does this mean the convergence is not affected by $l$, or the bound is loose?
Experimental Designs Or Analyses: The experimental results are well presented and analyzed. While the reviewer appreciates the theoretical contributions, more experiments (in particular ablation studies) are needed to demonstrate the effectiveness of the approach.
- Han et al. (2021) seems to be the most relevant work to this submission. However, it has not been included in the benchmark. If I understand correctly, FSL-SAGE is a generalized version of the method in Han et al. (2021), where one could use $l=\infty$. At least an ablation study of different choices of $l$ should be included.
- In line 369, the authors ``arbitrarily'' chose the structure of the auxiliary model. This seems to be inappropriate, as one can imagine this choice will have a non-trivial impact on the model performance. Also, the model cut strategy should be investigated empirically as well.
- It is suggested to include training latency (e.g., in wall clock time) as a metric.
- Sec. 4.2-4.3 included more assumptions to establish the convergence results, and those assumptions should be empirically justified. For example, can the authors quantify the value of $\epsilon$ for the proposed algorithm?
Supplementary Material: I have quickly reviewed Sec. A-B and reviewed Sec. C in detail.
Relation To Broader Scientific Literature: - A convergence analysis is provided for split federated learning with client-side update using auxiliary models.
Essential References Not Discussed: Since the main contribution is a convergence analysis for split federated learning, the authors are suggested to include the following references on split (federated) learning convergence, and discuss the differences/improvements.
- Li, Yipeng, and Xinchen Lyu. "Convergence analysis of sequential federated learning on heterogeneous data." NeurIPS, 2023.
- Han, Pengchao, et al. "Convergence analysis of split federated learning on heterogeneous data." NeurIPS, 2024.
Other Strengths And Weaknesses: - The paper implemented LoRA fine-tuning of LLMs in a split federated learning environment, which is very relevant and could benefit future research efforts.
Other Comments Or Suggestions: The current version of Fig. 1 is a bit confusing, and the authors may consider adding steps/ordering.
The paper could benefit from proofreading and correcting a list of typos. Here are a few examples.
- In line 71, ``due'' -> ``due to''
- In line 396, CIFAR-100 should be corrected to be CIFAR-10.
It would be better to include codes for reproducibility purposes.
Questions For Authors: Can the authors specify what technical challenges model alignment brings to the proofs and/or implementations compared to those in Han et al. (2021)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. Please refer to the abbreviations in reviewer **HNw2**'s rebuttal.
> **Comment 1:** In Theorem 4.3, physical meaning of $c$?
**Response:** Thank you, we missed defining the constant $c$ in **Theorem 4.3**, but it is defined later in **Theorem 4.8**. We will add the definition in **Theorem 4.3** in the next revision. $c < 0.5 - 20K^2 \eta_L^2 \widehat{L}_c^2$ is a positive constant defined to simplify the final expression of convergence.
> **Comment 2:** Theorem 4.3 is agnostic to $l$; does convergence depend on $l$?
**Response:** **Theorem 4.3** is indeed agnostic to $l$, the alignment interval, not because the convergence is unaffected by $l$ or that the bound is loose, but because $l$ directly affects the _auxiliary estimation error_, $\varepsilon^t$, given in **Theorem 4.3**, left-hand-side of Eq. (5). Our final result in **Theorem 4.8** Eq. (14) is obtained by further bounding $\varepsilon^t$. There, we have a bound on every $l^{th}$ round, which implies that a larger $l$ would need a proportionally larger number of rounds $T$ (i.e., slower) to converge and vice-versa.
> **Comment 3:** [1], though most relevant, is not included in the baselines.
**Response:** We did not include [1] in our baselines for the following reasons:
* [1] uses multiple copies of the SSM, one for each client, and also uploads cut-layer activations to the server at every iteration, making it very memory and communication inefficient.
* In CSE-FSL [2], which is a baseline for our work, the authors demonstrated that they performed much better than Han et. al [1] in terms of communication efficiency. Thus, it suffices to compare with [2].
* Lastly, our method is not a generalization of [1]. The following are two important differences between our method and [1]:
1. [1] updates the AMs via local loss functions at each local iteration, while we do not. This means that even as $l\to\infty$, our method will behave very differently.
2. Han et al. [1] used multiple SSMs, one for each client, while we use only 1 SSM.
[1] Han, Dong-Jun et al. “Accelerating Federated Learning with Split Learning on Locally Generated Losses.” (2021).
[2] Mu, Yujia, and Cong Shen. "Communication and storage efficient federated split learning." ICC 2023-IEEE International Conference on Communications. IEEE, 2023.
> **Comment 4:** Arbitrary choice of AMs has non-trivial impact on performance. Cut strategy should also be investigated.
**Response:** The choice of an appropriate AM architecture and cut-layer are interesting problems in their own right, but somewhat out of scope of this work. Like all other SL methods [1-2], we focus on a given split, and our approach works with **general models**. Similarly, we use a given AM and compare to CSE-FSL [2]. As we mention in **Comment 6** below, we will include an ablation study on AM sizes in the next revision.
> **Comment 5:** Training latency as a metric.
**Response:** We will estimate the latency assuming a fixed communication and computation bandwidth between the clients and server. We will include this simulated wall-clock result to the revised manuscript.
> **Comment 6:** Empirically justify $\epsilon$ in learnability assumption.
**Response:** This is also a response to **bte9**'s **Comment 3**. It is intractable to check whether a given AM is in-expectation PAC learnable. However, in our experiments, an AM chosen as a small portion (of size $\leq 0.1\times$) of the respective SSM demonstrates comparable, if not better, convergence than the tested baselines. We would also like to note that **Assumptions 4.6** and **4.7** are sufficient but not necessary conditions for convergence. In our revision, we will add an ablation study of the AM size on performance. There, we will also test smaller AMs than the ones presented in the paper.
> **Comment 7:** Include references for convergence of SplitFed.
**Response:** We thank the reviewer for pointing out the references on convergence analyses of sequential FL and SplitFed. We will add these to the next revision.
> **Comment 8:** Some typos; inclusion of source code.
**Response:** We thank the reviewer for detecting these oversights. In the next revision we will:
* Proofread and fix typos.
* Add numbers to **Fig. 1** to indicate the order of operations.
We will also share our source code after the review process (we have already shared our code as supplementary material in a `.zip` file for the review process).
> **Comment 9:** Technical challenges faced in proofs and implementation.
**Response:** While [1] originally introduced the idea of using auxiliary models to train clients in parallel, our work involves aligning the auxiliary model directly to _mimic_ the server-side model. This brings about several technical challenges in convergence analysis, all of which are detailed under the heading **2) Technical Challenges, Lines 70-90** in our manuscript.
---
Rebuttal Comment 1.1:
Comment: My comments have been satisfactorily addressed. I am ready to raise my score to 4.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their valuable feedback, and for helping us improve the quality of our manuscript. | Summary: The paper addresses the computational burden faced by clients when performing local updates on whole (possibly large) models by splitting the model into server-side and client-side components, following the approach used in prior split learning (SL)-based methods. Unlike previous works, the authors propose an approach to estimate gradient feedback from the server using an auxiliary model on the client side, which significantly reduces latency and communication overhead.
Specifically, the true cut-layer gradient is periodically received from the server, and the auxiliary model is trained using an MSE loss to approximate the gradient. The paper provides a theoretical convergence analysis and demonstrates the broad applicability of the method by conducting experiments on a language model as well.
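A minimal sketch of this alignment idea (our own toy construction, not the authors' code: the paper trains a neural auxiliary model by SGD on the MSE loss, whereas here the auxiliary gradient map is assumed linear and fit in closed form on an alignment batch):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.normal(size=(d, 1))  # stand-in for the server-side model

def server_grad(Z):
    # "true" cut-layer gradients for the toy loss 0.5 * ||W^T z||^2
    return Z @ (W @ W.T)

# alignment batch: cut-layer activations periodically shared with the server
Z_align = rng.normal(size=(32, d))
G_true = server_grad(Z_align)

# MSE alignment: for a linear auxiliary map A, minimizing ||Z A - G_true||^2
# is exactly a least-squares fit
A, *_ = np.linalg.lstsq(Z_align, G_true, rcond=None)

# between alignments, the client estimates gradient feedback locally via A
Z_new = rng.normal(size=(8, d))
err = np.abs(Z_new @ A - server_grad(Z_new)).max()
print(f"max gradient-estimation error: {err:.2e}")
```

In this linear toy the true gradient map is itself linear, so the auxiliary map recovers it essentially exactly; a neural AM approximating a nonlinear SSM would only do so up to the estimation error $\varepsilon^t$ bounded in the paper's analysis.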
Claims And Evidence: The claims in this study are supported by experimental evaluations and detailed analyses.
Methods And Evaluation Criteria: The proposed method is conceptually well-grounded, and the evaluation is appropriate.
Theoretical Claims: The convergence bound is rigorously analyzed, including both FSL-SAGE and its lazy variant.
Experimental Designs Or Analyses: The experiments and evaluations are well designed and valid.
Supplementary Material: I have read the sections discussing supplementary results and detailing a few key lemmas.
Relation To Broader Scientific Literature: The paper builds on prior SL based methods for addressing key challenges in FL, e.g., computational efficiency, communication overhead, and latency. The paper introduces an auxiliary model for gradient feedback estimation, significantly reducing latency and communication costs. The authors also provide rigorous convergence analysis for both FSL-SAGE and its lazy variant, ensuring theoretical foundation of the paper.
Essential References Not Discussed: I have not noticed any key references that were missing from the paper.
Other Strengths And Weaknesses: ### Strengths
- The proposed method effectively addresses the challenges of existing SL-based approaches by introducing an auxiliary model to approximate gradient feedback.
- The gradient feedback estimation mechanism significantly reduces communication overhead and latency while ensuring high performance.
- A convergence analysis is provided, covering both FSL-SAGE and its lazy version.
- Experimental results on language model are presented, demonstrating the broad applicability of their method.
### Weaknesses
- Sending the alignment dataset to the server poses a potential risk of privacy leakage.
- The auxiliary model is not particularly small; it appears to be even larger than the client-side split model, which may impose additional computational burden on the client compared to traditional SL-based methods.
- From the server-side perspective, the optimization of $x_a$ is required for 'all' the clients. As suggested by the equation (2), if I understand correctly, this optimization might require double derivative with respect to $x_a$, which could introduce a significant computational burden on the server.
Other Comments Or Suggestions: I appreciate the careful consideration of the alignment mechanism in the convergence analysis, and the attempt to analyze the lazy version as well. These aspects add significant value to the paper.
Regarding the convergence rate, while the rate with respect to $T$ matches existing results, one of the key focuses is how the rate is expressed in terms of the number of clients. In fact, a linear speedup with respect to the number of clients is often achieved in recent FL works. However, in the convergence analysis presented in this paper, it is unclear how this aspect is handled.
Additionally, while uploading the alignment dataset itself poses a potential privacy leakage risk, this issue is not discussed in the paper. Including a discussion on this aspect would strengthen the paper.
Questions For Authors: Please refer to the weaknesses and comments parts.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. For brevity we will use the abbreviations FSL for federated split learning, FL (SL) for Federated (resp. Split) Learning, CSM (SSM) for client-side (resp. server-side) model, and AM for auxiliary model.
> **Comment 1:** Sending alignment dataset to server poses potential privacy risk.
**Response:** We agree that there are privacy concerns about uploading cut-layer data to the server, but one can argue that this risk would be equally present in any SL algorithm which sends smashed data to the server. The study of attacks and privacy guarantees for our algorithm is indeed important, which deserves a separate paper and is beyond the scope of this manuscript. Also, we note that the FSL setting in this paper is the same as in other SL algorithms published in the literature, hence having the same privacy performance.
> **Comment 2:** The AM size is larger than the CSM; imposes compute burden on clients.
**Response:** We agree that the AMs used in our experiments are large. Please see our response to **Comment 6** of reviewer **iZqi**. We will conduct a more thorough ablation study to demonstrate the effect of AMs size in our revision.
> **Comment 3:** Optimization of $x_a$ requires double derivative, causing compute burden on server.
**Response:** It is true that the alignment of AMs requires a double derivative, and this could pose a computation burden, especially if we use large AMs. Our ablation experiments on the AMs, as discussed above, will help clarify if this is indeed a significant burden.
In our experiments with GPT2m, we used an AM of size 92.4M and didn't face very high compute burdens.
> **Comment 4:** In theoretical results, does FSL-SAGE enjoy linear speedup?
**Response:** Thank you for raising this interesting point; it will help clarify our manuscript. Unfortunately, all FSL methods that use a single server-side model handling one client at a time, including our method, lack linear speedup [1]. Primarily, this is because the server-side model is trained sequentially on cut-layer activations received from the clients. We will add a note about this aspect to the revision.
[1] Han, Pengchao, et al. "Convergence analysis of split federated learning on heterogeneous data." arXiv preprint arXiv:2402.15166 (2024). | Summary: The paper proposed a new federated split learning algorithm called FSL-SAGE. It builds upon existing works on local split federated learning, where client updates are derived from local approximations using an auxiliary model attached to each client. A key challenge with previous approaches is that, due to the lack of feedback from the server, these local auxiliary modules can drift away from the optimal solution. FSL-SAGE addresses this issue by periodically aligning these auxiliary modules via an additional optimization loop conducted on the server side.
Claims And Evidence: The core contribution of this work is rather marginal. The core element of this technique is the server feedback to the client’s auxiliary network which is carried out indirectly via periodic alignment of these modules on the server-side is straightforward and has been discussed before.
Methods And Evaluation Criteria: The authors use decent datasets and large models for evaluation, and the baselines are generally appropriate. However, the setting of a single local epoch is limiting and not widely practiced. Additionally, the manual nature of model splitting may hinder generalization to other real-world scenarios. Moreover, the number of clients used in the experiments is very low, whereas typical federated learning setups involve hundreds of clients with partial participation.
Theoretical Claims: 1. The proof sketch appears technically sound, and most assumptions are standard. However, the in-expectation PAC learnability assumption, though theoretically mild, may be challenging in practice since it requires the auxiliary model to be sufficiently expressive (large enough).
2. In previous works, the auxiliary network was kept small because it was not intended to fully replace the server model, whereas in this paper, a larger auxiliary network is used, even larger than the client model itself.
3. Additionally, the paper assumes an honest server, yet in practice, the server could be prone to various attacks. A discussion on potential attacks, particularly regarding privacy concerns, is missing.
Experimental Designs Or Analyses: 1. The paper does not analyze the computational load on the server side, which is crucial for large-scale federated learning scenarios where thousands of devices participate simultaneously. Storing alignment datasets for every client and running the secondary optimization loop might create a bottleneck on the server.
2. Also, the added computation due to the increased auxiliary network size, as compared to the small aux networks in existing techniques, should be discussed.
3. The paper does not provide guidance for practitioners on how to select the optimal hyperparameters introduced e.g. the alignment period, neither does the paper provide an ablation study.
4. Given that the key hypothesis is that periodic alignment is what makes the difference, an analysis of auxiliary gradient error over time is essential to confirm it.
5. Wall-clock time comparison or an estimate would be great to understand the potential latency introduced by the alignment overhead.
Supplementary Material: I skimmed through the supplementary material, which provides the convergence analysis and additional experimental details. It generally supports the main text.
Relation To Broader Scientific Literature: The paper is well-positioned within the federated and split learning literature. However, it would benefit from discussing privacy analyses and through evaluation of the trade-offs introduced.
Essential References Not Discussed: Since FSL-SAGE involves sending smashed data (activations) and labels to the server, the paper should reference works that analyze privacy risks and provide mitigation approaches.
The periodic alignment of these auxiliary modules in the centralized setting is discussed in a related paper LSL-PGG[1] which should be cited.
[1] Bhatti, Hasnain Irshad, and Jaekyun Moon. "Locally Supervised Learning with Periodic Global Guidance." arXiv preprint arXiv:2208.00821 (2022).
Other Strengths And Weaknesses: 1. A potential bottleneck for the system arises when clients participate partially. Under a low participation ratio, the server still has to store alignment datasets for every client. It is also unclear how outdated data is managed when clients drop out.
2. The choice of the split cut-layer determines the size of the smashed activation, which can be substantial. In this method, the server must store these activations for each client for alignment, potentially imposing a significant computational and memory burden. A more detailed analysis of this aspect is needed.
3. The size of the auxiliary network is quite large compared to existing baselines. For ResNet-18, the auxiliary network (2.1M parameters) is nearly three times the size of the client’s original network, and it is even larger (92.4M parameters) for the language task. I would like to see how does it compare to the baselines when the aux networks are used as suggested by those works.
Other Comments Or Suggestions: The paper has some typos, e.g. “limtations” and “sever-side”. The text “For LoRA finetuning” can be misleading; consider clarifying the context. A brief section mentioning limitations would be helpful. Also, readers might wonder: does sending labels to the server compromise privacy? A line addressing this could be added for readers to understand the potential risk associated with FSL techniques.
Questions For Authors: 1. Does the client-side auxiliary module also get updated during the local update step?
2. What happens if the auxiliary network is smaller? Is there a minimum size required for the auxiliary network to effectively serve as a surrogate for the server model?
3. Can you provide more insight or data on choosing the alignment interval? The theory gives a trade-off, but practically how does it affect the performance.
4. How does the server scale when the number of clients increases significantly (e.g., hundreds or thousands)?
5. How do you justify the trade-off between increased auxiliary network size and overall performance, especially considering the potential burden on client devices?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions. Due to space limitations, we could only respond to a subset of the more critical comments in this rebuttal, but we are happy to complete our responses to your remaining comments in the discussion stage when new space opens up. Please refer to the abbreviations in **HNw2**'s rebuttal.
> **Comment 1:** Marginal contribution; server feedback already mentioned in [1]. Periodic alignment is related to LSL-PGG[1] which should be cited.
Thank you for pointing us to Ref. [1]. We would like to clarify that our core contributions are distinct from Refs. [1-2] as follows:
* Unlike LSL-PGG [1], which is a centralized algorithm with the goal of reducing the training memory footprint, our work is a **federated** split learning algorithm designed to train large models using FL on data distributed over commodity hardware.
* In [1] the AMs are updated indirectly via a global loss update. In contrast, we update the AMs directly to _mimic_ the SSM by minimizing the MSE loss.
* To the best of our knowledge, our approach is the first AM-based FSL approach to have a convergence guarantee on the joint model. Previous works [2] only provide separate convergence guarantees for the CSM and SSM, which do not imply convergence of the joint model to a minimum.
We will cite [1] and clarify these differences in the next revision.
[1] Bhatti et al. "Locally Supervised Learning with Periodic Global Guidance." arXiv preprint arXiv:2208.00821 (2022).
[2] Mu et al. "Communication and storage efficient federated split learning." ICC 2023-IEEE International Conference on Communications. IEEE, 2023.
> **Comment 2:** Single local epoch setting, small number of clients $m$, and manual splitting impractical.
1. **Single Epoch:** We chose the local epoch to be 1 in our experiments primarily for a fair comparison to previous literature [2], which also uses a single local epoch. To clarify, our algorithm statement and theoretical analysis make **no such assumption** regarding the number of local iterations or local epochs.
2. **Manual Splitting:** Our framework does **not** depend on the splitting technique used, and we do **not** use the term 'manual splitting' anywhere in our manuscript. Developing an algorithm that can optimally split a model would indeed be very interesting, but this fundamental topic deserves a separate paper and is beyond the scope of this manuscript. We will pursue this topic in our future studies, and we thank the reviewer for pointing out this direction.
3. **Number of Clients:** Our theoretical analysis reveals that $m$ in the FL setup does not impact our convergence rate. We have only simulated 10 clients in our experiments due to resource limitations. We note that our shared source code is configurable to accommodate a much higher $m$ given enough GPU resources.
> **Comment 3:** 1. The in-expectation learnability assumption is challenging in practice. 2. AMs are larger than the CSMs, adding to client computation. What is the trade-off between AM size and performance? 3. How do we compare to the baselines when smaller AMs are used?
* Please see our response to **Comment 6** by **iZqi**.
* In the next revision, we will also compare our method against baselines using smaller AMs as you suggested.
> **Comment 4:** Assumes an honest server; include a discussion of potential privacy concerns. Cite works analyzing privacy risks. Does sending labels to the server compromise privacy?
* Please see our response to **Comment 1** by **HNw2**. Thank you for pointing us to these references. We will cite them in our revision as you suggested.
* Sharing labels is the same in all FSL methods. Hence, our method achieves the same privacy performance as in other FSL methods.
> **Comment 5:** 1. Server computational load in large-scale FL; Storage and computation bottleneck due to alignment. 2. How does server deal with partial client participation and client drop-out?
Storing the alignment dataset at the server and alignment can indeed be a bottleneck at the server. To mitigate the need for storage, a simple solution is to align the AMs on the most recent batch of cut-layer activations on an on-demand basis, thus also solving the problems associated with client drop-out and partial client participation. Also, one can perform alignment in another process on the server so it doesn't bottleneck the SSM optimization. While these are interesting engineering extensions to our algorithm, they do not affect our theoretical claims and results, which are our main contributions.
> **Comment 6:** Key hypothesis is periodic alignment; analysis of auxiliary gradient error is essential.
Thanks for the comment. We analyze the auxiliary gradient error given by Eq. (5) in the manuscript, in **Section 4.2**. We can bound this term as $\mathcal{O}(1/\sqrt{T})$ provided the in-expectation learnability assumption holds, which shows that the gradient error decreases with the same sub-linear rate. | null | null | null | null | null | null |
What If We Recaption Billions of Web Images with LLaMA-3? | Accept (poster) | Summary: The authors finetune a better Llava-1.5 model using a more advanced Llama model. The authors then recaption the DataComp-1B dataset and show promising results in terms of ImageNet zero-shot accuracy and text-to-image retrieval benchmarks. The authors finally use the CLIP score to show that the recaptioned-caption is better correlated with the image than the initial web-scraped caption.
Claims And Evidence: The claims are well supported.
Methods And Evaluation Criteria: Strengths:
- The dataset itself is very useful to researchers. For example, it could be used to train better open-sourced CLIP models.
Weaknesses:
- There are no algorithmic contributions. The authors use off-the-shelf methods.
- In my opinion, there is some serious misalignment between what the authors are trying to say about their dataset and what type of results the authors show in the tables. The authors show that their generated caption is more descriptive; consequently, a CLIP model trained on the recaptioned dataset does not do well in zero-shot classification (see Table 3, p=0.0). Even for text-to-image retrieval and image-to-text retrieval tasks, the authors have to set p to 0.8 (a larger portion of original captions) to see some recall improvement. This paper could benefit from more complex benchmarks that test the model's understanding of the image (such as image captioning and VQA).
- I know DataComp is derived from LAION, which was taken down a year ago. Is this data still publicly available?
Theoretical Claims: N/A
Experimental Designs Or Analyses: As explained in the "Methods And Evaluation Criteria" section, I don't think the zero-shot and cross modal retrieval benchmarks are the best metrics to showcase the superiority of the new recaptioned dataset. (Tables 2,3,4,5).
I do think Table 6's experiment on text-to-image generation is a step in the right direction. However, Table 6 is less comprehensive and not the focus of the paper.
Supplementary Material: No
Relation To Broader Scientific Literature: The proposed novel dataset is a significant contribution to the multimodal literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: Page 5 line 227: "We can observe ..." This sentence refers to some numbers that are not cited. Please cite the corresponding table.
Questions For Authors: Is this dataset derived from the LAION dataset with safety fixes (https://laion.ai/blog/relaion-5b/)? If not, the impact of this work is limited, because you can't release the dataset.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ### **Q1: No algorithmic contributions.**
**A1:** While we don’t introduce new recaptioning algorithms, we highlight two core contributions:
Recap-DataComp-1B, the first fully open, billion-scale image-text dataset with synthetic captions generated using LLaMA-3. Prior works [1,2] are proprietary and closed-source. Scaling recaptioning with advanced LLMs at this scale is unprecedented and enables reproducible, large-scale research in multimodal learning.
This dataset supports the first public large-scale evaluation of CLIP and T2I diffusion models trained on high-quality synthetic captions. Our experiments show notable gains in cross-modal tasks, long-context understanding, and image generation, making Recap-DataComp-1B a valuable resource for the community.
[1] Improving Image Generation with Better Captions. 2023
[2] A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image. 2023
---
### **Q2: Misalignment between their dataset’s advantage and results.**
**A2:** Our main goal is to improve noisy web captions by aligning them better with images. Hence, we focus on tasks like cross-modal retrieval that directly measure image-caption alignment. We also include human and GPT-based evaluations (Sec. 4.2 and Appendix A) to assess caption quality. In classification, training only on synthetic captions degrades CLIP performance. However, mixing just 10% of original captions substantially recovers accuracy, confirming that original captions still play a key role in avoiding data collapse. In contrast, T2I diffusion benefits from detailed, synthetic captions. Setting the mixing ratio to p=0.1 (i.e., 90% recaptions) leads to better generation results since richer captions produce better text latents.
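The mixed-caption training described above is simple to sketch. The snippet below is illustrative only (the function name and sampling scheme are assumptions, not the authors' code); it follows the paper's convention that p is the fraction of original web captions kept during training:

```python
import random

def pick_caption(original, recaption, p, rng):
    # Keep the original web caption with probability p; otherwise
    # substitute the synthetic recaption.
    # p=1.0 -> original captions only, p=0.0 -> recaptions only.
    return original if rng.random() < p else recaption

# Toy batch: at p=0.8, roughly 80% of sampled captions are originals.
pairs = [(f"web caption {i}", f"rich recaption {i}") for i in range(1000)]
rng = random.Random(42)
sampled = [pick_caption(o, r, p=0.8, rng=rng) for o, r in pairs]
frac_original = sum(s.startswith("web") for s in sampled) / len(sampled)
```

Per the rebuttal, intermediate ratios such as p=0.8 recover most of the zero-shot classification accuracy while retaining the retrieval gains from the richer recaptions.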
---
### **Q3: More complex benchmarks (such as image captioning and VQA). Table 6 is less comprehensive.**
**A3:** Thank you for the suggestion. Our goal is to provide a high-quality, large-scale dataset for both understanding and generation tasks. We conducted large-scale CLIP and T2I experiments to demonstrate its effectiveness and potential. Table 6 is limited by the high computational cost—for instance, training CLIP ViT-B/16 for 2 epochs only takes approximately 0.5 days, while DiT-B/4 requires around 7 days for a single epoch on TPU-v3-256. These constraints restricted the number of benchmarks we could include. Furthermore, the current results already demonstrate our dataset’s effectiveness, and we will leave further evaluations to future work.
Following your advice, we adopted the LLaVA framework to evaluate more challenging tasks such as VQA and image captioning. We replaced its pretraining data (558K LAION/CC3M/SBU) with 558K Recap-DataComp-1B samples. Results show consistent gains, including +0.6% across four VQA tasks and +34 points on MME, as shown in the following table:
| Pre-train Dataset | Tunable Module | Training Steps | TextVQA | MME | VQA-V2 | MM-Vet |
|----------------------------------|----------------|----------------|---------|------------|--------|----------|
| LLaVA-LCS-558K | Projector | 2K | 59.1 | 1489/277 | 77.9 | 34.4 |
| Recap-DataComp-1B (Recaption Only) | Projector | 2K | 60.1 | 1523/260 | 78.6 | 35.1 |
Lastly, we further evaluated the CLIP visual encoder trained with our mix-training strategy on our dataset. Our LLaVA with ViT trained only on synthetic captions (p=0) shows only a slight drop compared to training on the original data (p=1) — an average of 1.2% across four VQA benchmarks (TextVQA, GQA, MM-Vet, SEED). Notably, at p=0.8, LLaVA achieves the best overall performance, matching the p=1 model and surpassing it on MME by 35 absolute points.
| Model | Mix ratio | IN-1K | Text VQA | GQA | MME | MM-Vet | SEED |
|-------|-----------|-----------------|----------|------|------|--------|------|
| B/16 | p=0 | 33.8 | 50.0 | 58.9 | 1335 | 25.0 | 62.8 |
| B/16 | p=0.8 | 69.8 | 52.0 | 60.2 | 1417 | 25.6 | 64.2 |
| B/16 | p=1 | 70.5 | 51.8 | 60.0 | 1382 | 25.6 | 63.9 |
These results demonstrate that, when evaluated on more diverse and challenging tasks, our proposed dataset not only significantly enhances cross-modal retrieval performance but also yields promising gains on vision-language understanding benchmarks.
---
### **Q4: Is this data DataComp still publicly available? Safety fixes of LAION.**
**A4:** Yes, DataComp-1B remains publicly available and is not derived from LAION-5B but from a different Common Crawl snapshot. The original dataset is accessible on Hugging Face Datasets. More importantly, it applied strict NSFW filtering, and we discuss safety considerations in the Appendix. We believe releasing our recaptioned dataset poses no major safety risks.
---
**Other**: on page 5, line 227, we cite the GPT-4V rating. We will clarify this in the revision. | Summary: This paper explores the impact of improving textual descriptions for large-scale web-crawled image-text datasets using LLaMA-3. The authors propose a recaptioning pipeline that fine-tunes a LLaMA-3-8B-powered LLaVA-1.5 model and applies it to ∼1.3 billion images from the DataComp-1B dataset. The resulting dataset, Recap-DataComp-1B, enhances training for vision-language models. Experiments demonstrate that for discriminative models like CLIP, the recaptioned dataset improves zero-shot performance on four cross-modal retrieval tasks. For generative models like Diffusion Transformers, the refined captions enable better alignment with text instructions, particularly in handling complex queries. In general, this paper is well organized.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: all
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
+ The paper contributes an open-source, large-scale recaptioning pipeline, fostering community-wide research in vision-language model training.
+ The study demonstrates quantitative gains for both discriminative and generative models, enhancing retrieval accuracy and text-to-image generation quality.
+ By systematically refining web-crawled captions, the proposed method mitigates inherent noise, leading to more semantically rich and effective training data.
Weaknesses:
- Recaptioning has already been explored and verified as effective in prior work, such as DALL-E 3 [a] and [b]. Compared with this existing work, the present paper does not seem to bring impressive discoveries, insights, or conclusions.
- The novelty of this work is weak and unclear. I understand this work may focus on empirical verification, but an empirical work also needs to provide novel ideas or experience to contribute to the development of the research community.
- The performance improvement from regenerated captions over raw captions on some tasks (such as the results in Tab. 1 and Tab. 4) is marginal.
- Weaker performance with re-captioning on ImageNet-1K can be observed in Tab. 2. Are there other tasks where re-captioning may be harmful? More analysis would be helpful; this should not weaken the quality of the work, but would direct the community's attention to the potential disadvantages or risks of recaptioning.
[a] Improving Image Generation with Better Captions. 2023
[b] A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image. 2023
Other Comments Or Suggestions: NA
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ### **Q1: Novel ideas, insights, and conclusions.**
**A1:** Thank you for raising concerns about novelty. Our primary focus is on developing **Recap-DataComp-1B**, which we believe to be **the first publicly available image-text dataset at the billion-scale generated by LLaMA-3**. Unlike previous works [1,2] that remain proprietary—both in terms of captioning models and datasets—our approach is fully open and demonstrably scalable to the billion level. We believe this openness and scale set a new milestone in multimodal research.
While the idea of “recaptioning” is not entirely new, we have yet to see it executed at this scale using advanced large language models. By creating recaptions on such a massive level, our dataset enables the first public, large-scale experiments into training models like CLIP and T2I diffusion with high-quality synthetic captions. In our view, **Recap-DataComp-1B** offers a novel and significant contribution to the field, with considerable potential to advance future multimodal research.
[1] Improving Image Generation with Better Captions. 2023
[2] A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image. 2023
---
### **Q2: Marginal improvement on some tasks.**
**A2:** Thank you for the question. As shown in Table 1, our LLaMA-3-based LLaVA **8B** model offers slightly better performance than the **13B** variant while being significantly faster. We selected it to balance captioning quality and efficiency.
In Table 4, Recap-CLIP clearly outperforms the previous DataComp-1B baseline trained on the same samples, highlighting the improved quality of our dataset. While our results slightly exceed those of SigLIP, it’s important to note that SigLIP requires four times more training samples on a private dataset, which is five times larger—making our gains particularly noteworthy.
---
### **Q3: Weaker performance on ImageNet-1K. Are more tasks harmful?**
**A3:** Thanks for your constructive suggestion. We observe that training exclusively with synthetic captions degrades CLIP performance. However, introducing a small portion of original captions (e.g., 10%) effectively recovers performance, indicating that original captions remain crucial in preventing data collapse, as noted in prior work [1]. Additionally, we clarify that while recent works like LaCLIP [2] and VeCLIP [3] demonstrate synthetic captions can enhance CLIP training, to our knowledge, no prior work has trained models exclusively on synthetic captions. Our study represents the first public attempt, shedding light on the challenges of this approach.
Second, we further evaluate the visual representations learned by these models through linear probing and full fine-tuning on ImageNet-1K. While the CLIP ViT trained only on synthetic captions underperforms in zero-shot classification, we observe that lightweight tuning (e.g., linear probing or fine-tuning) greatly narrows the performance gap seen in zero-shot settings. Moreover, models trained on mixed captions perform on par with those trained on original data, striking a favorable balance between classification and retrieval performance.
| Model | Mix ratio | Zero-shot | Linear Prob | Fully FT |
|-------|-----------|-----------------|-------------|----------|
| B/16 | p=0 | 33.8 | 76.1 | 83.6 |
| B/16 | p=0.8 | 69.8 | 80.3 | 84.2 |
| B/16 | p=1 | 70.5 | 80.4 | 84.1 |
Following your advice, we also evaluated additional tasks to analyze the impact of recaption following the original DataComp paper across 22 datasets (VTAB contains 13, IN-1K-Shift contains 6, and Retrieval contains 3), shown in the table below. We found that purely synthetic captions negatively impacted zero-shot performance. However, our mix-training strategy still achieves superior performance in VTAB and retrieval tasks.
| Model | Mix ratio | IN-1K | VTAB | IN-1K-Shift | Retrieval |
|-------|-----------|-----------------|------|-------------|-----------|
| B/16 | p=0 | 33.8 | 38.0 | 33.3 | 42.2 |
| B/16 | p=0.8 | 69.8 | 57.5 | 55.5 | 56.6 |
| B/16 | p=1 | 70.5 | 56.8 | 55.6 | 54.9 |
These results indicate that only synthetic captions may negatively impact zero-shot classification performance. Our mixed-training strategy effectively balances these competing factors. We acknowledge that further investigation into potential drawbacks is valuable. Given the demonstrated scalability and high-quality alignment of our proposed dataset, we believe it provides a strong baseline for future research.
---
[1] Seddik M E A, et al. How bad is training on synthetic data? a statistical analysis of language model collapse[J]. arXiv, 2024.
[2] Fan, Lijie, et al. "Improving CLIP training with language rewrites." NeurIPS. 2024.
[3] Lai, Zhengfeng, et al. "VeCLIP: Improving CLIP training via visual-enriched captions." ECCV. 2024. | Summary: This paper explores the impact of recaptioning web-crawled image-text pairs using LLaMA-3. The authors identify that web-crawled datasets (like DataComp-1B) suffer from image-text misalignment and low-quality textual descriptions. Their approach is straightforward: they fine-tune a LLaMA-3-8B powered LLaVA-1.5 model and use it to generate detailed captions for approximately 1.3 billion images from DataComp-1B, creating a new dataset called Recap-DataComp-1B.
The paper demonstrates that these enhanced captions are longer, more diverse, and better aligned with their corresponding images. Through extensive experiments, they show that vision-language models trained on this enhanced dataset exhibit significant improvements: CLIP models achieve an average 3.1% boost in zero-shot cross-modal retrieval performance when trained on a mix of original and recaptioned data, while text-to-image Diffusion Transformers show better alignment with text instructions and produce higher-quality images (demonstrated by improved FID scores and CLIP scores).
Claims And Evidence: Most claims in the paper are well-supported by clear and convincing evidence:
1. The claim that web-crawled image-text pairs are inherently noisy and misaligned is effectively demonstrated through visual examples in Figure 1, comparing original captions with recaptions.
2. The performance improvement of their LLaMA-3-powered LLaVA model over other LLaVA variants is adequately supported by comparisons on MMMU and MM-Vet benchmarks in Table 1.
3. The claim that Recap-DataComp-1B contains higher-quality captions is validated through:
- Word and length distributions (Figures 1 & 3)
- GPT-4V evaluations showing an average rating increase from 3.71 to 4.14
- LongCLIP scores demonstrating better semantic alignment (89.91 vs. 10.09)
4. The benefits of training vision-language models on recaptioned data are thoroughly evidenced in Tables 2-5 for CLIP models and Table 6 for DiT models.
The only claim that could use stronger evidence is the assertion that recaptioning enhances classification performance. The results in Table 3 show that using purely recaptioned data (p=0.0) actually hurts ImageNet classification. While the authors acknowledge this and pick a good compromise at p=0.8 for later ablations, more analysis of this limitation would strengthen the paper.
Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate and comprehensive:
1. The evaluation of caption quality uses a multi-faceted approach:
- Automatic metrics (CLIP, LongCLIP)
- Human-in-the-loop evaluation (GPT-4V)
- Qualitative examples and word distribution analysis
2. For CLIP evaluation, the authors perform extensive ablations:
- Testing different mixing ratios between original and recaptioned data
- Experimenting with various text encoder sizes
- Evaluating on standard benchmarks (ImageNet-1K, COCO, Flickr30K)
- Additional evaluations on challenging datasets (Urban1K, VG-Attribution)
3. For DiT evaluation, they use standard metrics (FID, CLIP scores) as well as GPT-4V scoring, which provides a more holistic assessment of generation quality.
The experimental design includes appropriate baselines and ablations, making the conclusions reliable.
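For readers less familiar with the retrieval metrics reported throughout these tables, the numbers are standard Recall@K. A minimal illustrative sketch of the metric follows (not the authors' evaluation code; the toy candidate lists are invented):

```python
def recall_at_k(ranked_candidates, gold, k):
    # 1 if the ground-truth match appears among the top-k retrieved candidates.
    return int(gold in ranked_candidates[:k])

# Toy example: three queries with ranked candidate ids and their gold matches.
ranked = [[2, 0, 1], [1, 2, 0], [0, 1, 2]]
gold = [0, 0, 0]
r_at_1 = sum(recall_at_k(r, g, 1) for r, g in zip(ranked, gold)) / len(gold)
r_at_2 = sum(recall_at_k(r, g, 2) for r, g in zip(ranked, gold)) / len(gold)
```

In the toy example, only the third query ranks its gold item first, so Recall@1 is 1/3 while Recall@2 rises to 2/3; the paper reports the same metric averaged over COCO and Flickr30K queries.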
Theoretical Claims: The paper is primarily empirical and does not make significant theoretical claims requiring formal proofs.
Experimental Designs Or Analyses: The experimental designs are sound and well-executed:
1. CLIP training experiments (Section 5) are comprehensive, exploring mixing ratios, text encoder sizes, and evaluating on multiple benchmarks. The decision to use p=0.8 as a compromise between classification and retrieval performance is well-justified.
2. Text-to-image experiments (Section 6) properly evaluate how different mixing ratios affect generation quality, with both quantitative metrics and visual examples.
3. Caption quality analyses (Section 4) use multiple complementary methods to assess semantic alignment and descriptiveness.
A significant limitation, however, is that while the paper extensively discusses how recaptioned data affects downstream models, all experiments are conducted using a single recaptioning model configuration. The paper lacks systematic quantitative analysis of how different settings of the caption model itself (base LLM selection, image encoder variations, training data choices, or whether conditioning on original captions) affect caption quality and subsequent downstream performance. Although the authors provide qualitative examples in the supplementary materials (Figures 7 and 8), quantitative ablation studies on the captioning pipeline itself would significantly strengthen the work and provide insights into which components most strongly influence the quality of the recaptioned dataset.
Supplementary Material: I reviewed all supplementary material.
Relation To Broader Scientific Literature: The paper is well-positioned within the broader scientific literature:
1. It builds upon foundational work on vision-language models like CLIP (Radford et al., 2021) and text-to-image models (DiT, Diffusion Transformers).
2. It clearly differentiates from previous recaptioning approaches like ShareGPT4V (Chen et al., 2023) and LaCLIP (Fan et al., 2024) by emphasizing:
- The use of open-source models (LLaMA-3) rather than closed APIs
- The billion-scale application, which is significantly larger than previous efforts
- The comprehensive evaluation across both discriminative and generative models
3. The authors acknowledge the high costs associated with using API-based approaches like GPT-4V for billion-scale datasets, highlighting the practical significance of their approach.
Essential References Not Discussed: Not noticed.
Other Strengths And Weaknesses: Strengths:
1. The paper tackles an important problem (improving web-crawled image-text data) at an unprecedented scale (1.3 billion images).
2. The approach is practical and accessible, using open-source models rather than closed APIs.
3. The comprehensive evaluation across multiple model types (CLIP and DiT) and tasks demonstrates the broad applicability of the approach.
4. The analysis of optimal mixing ratios provides practical insights for future researchers.
Weaknesses:
1. Limited exploration of prompt engineering: The paper uses a single prompt template for recaptioning. Testing multiple prompting strategies might yield more diverse captions.
2. The degradation in classification performance when using only recaptioned data (p=0.0) is noted but not extensively analyzed.
3. The human evaluation is limited to GPT-4V ratings rather than traditional human evaluation, though the large-scale nature of the dataset makes this a practical choice.
Other Comments Or Suggestions: 1. It would be valuable to analyze potential biases in the generated captions, as they may inherit biases present in the training data of LLaMA-3 and LLaVA.
2. Showing some failure cases where the recaptioning model produces inaccurate descriptions would help understand the limitations of the approach.
3. The appendix mentions experiments with different prompting strategies and conditioning on original captions. These findings could be highlighted more prominently in the main paper.
4. Releasing smaller, curated subsets of the recaptioned dataset would benefit researchers with limited computational resources.
Questions For Authors: 1. You found that training with purely recaptioned data (p=0.0) significantly hurts classification performance on ImageNet-1K. Do you have insights into why this happens, and have you explored techniques to mitigate this issue while preserving the benefits for retrieval tasks?
2. In the appendix, you mention experimenting with conditioning the recaptioning model on original captions. Did you evaluate training vision-language models directly on these condition-based recaptions? Would this approach help with the classification performance drop observed with pure recaptions?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ### **Q1: Analysis of this limitation, insights, and technology to migrate classification performance issues.**
**A1:** Following your valuable suggestion, we analyze the classification drop with synthetic captions and identify three likely causes: (1) CLIP's difficulty in learning from long, rich captions [1], (2) captioners' poor performance on classification tasks [2], and (3) the limitation of evaluating only in the zero-shot setting. We additionally evaluate performance on the standard ImageNet-1K classification task using both linear probing and full fine-tuning. We observe that introducing lightweight tuning significantly reduces the initial performance gap seen in zero-shot settings, indicating that models trained solely on synthetic captions still learn strong visual representations.
| Model | Mix ratio | Zero-shot | Linear Prob | Fully FT |
|-------|-----------|-----------------|-------------|----------|
| B/16 | p=0 | 33.8 | 76.1 | 83.6 |
| B/16 | p=0.8 | 69.8 | 80.3 | 84.2 |
| B/16 | p=1 | 70.5 | 80.4 | 84.1 |
Regarding potential strategies, mixed training already achieves a preferable balance of classification and retrieval, and recent work [3] suggests that advanced CLIP training methods can further exploit synthetic captions.
---
### **Q2: Caption model and quantitative results.**
**A2:** Thanks for raising the concern about ablations on the caption model. We evaluate four caption models on a 30M subset, with results shown in the table below. We find that LLaVA-1.5-LLaMA3-8B consistently yields the best downstream CLIP performance. Larger models like LLaVA-NeXT-Gemma2-27B are significantly slower—requiring 201 days to recaption the full 1 billion samples on the same infrastructure—and also underperform on retrieval tasks. Thus, we chose LLaVA-1.5-LLaMA3-8B for re-captioning the full dataset, balancing quality and efficiency.
| Model | Caption Model | Caption Speed | Mix Ratio | IN1K | Flickr T2I | Flickr I2T | COCO T2I | COCO I2T |
|-------|---------------------------|---------------|-----------|------|------------|------------|----------|----------|
| L/16 | - | - | 1.0 | 66.1 | 48.6 | 65.3 | 30.2 | 41.7 |
| L/16 | LLaVA-1.5-LLaMA3-8B | 382 img/s | 0.6 | 67.5 | 61.1 | 77.8 | 39.5 | 54.0 |
| L/16 | LLaVA-next-Gemma2-27B | 54 img/s | 0.6 | 67.1 | 58.9 | 74.6 | 37.0 | 51.8 |
---
### **Q3: Prompt engineering and quantitative results.**
**A3:** We ablate prompt engineering on a 30M subset and found prompt choice influences downstream tasks shown in the following table:
| Model | Prompt Type | Mix Ratio | IN1K | Flickr T2I | Flickr I2T | COCO T2I | COCO I2T |
|-------|------------------|-----------|-------|------------|------------|----------|----------|
| L/16 | - | 1.0 | 66.1 | 48.6 | 65.3 | 30.2 | 41.7 |
| L/16 | Original-prompt | 0.6 | 67.5 | 61.1 | 77.8 | 39.5 | 54.0 |
| L/16 | Concise-prompt | 0.6 | 67.8 | 61.5 | 80.6 | 40.5 | 57.1 |
| L/16 | Diverse-prompt | 0.6 | 68.2 | 63.7 | 81.4 | 42.3 | 57.3 |
| L/16 | Condition-prompt | 0.6 | 68.2 | 62.0 | 78.7 | 40.1 | 55.5 |
Our findings show that even simple prompts, such as the *Original* setup, already significantly enhance retrieval performance at this scale. As the first to release a billion-caption dataset, these promising results motivate us to further expand and refine our evaluations in future work.
---
### **Q4: Limited Human Evaluation.**
**A4:** We present human evaluation results in Appendix A and Figures 4 and 5, which align closely with GPT-4V assessments. We will further emphasize this consistency in future revisions.
---
### **Q5: Potential biases and failure cases in the generated captions.**
**A5:** We acknowledge the presence of biases and failure cases in generated captions inherited from the training data of LLaMA-3 and LLaVA. Addressing and mitigating these biases will be a key focus of our future work. Regarding presenting potential failure cases, one example of a current shortcoming is the pipeline’s inability to correctly identify certain objects—for instance, the “Western Kingbird” shown in Figure 1. We will include a more detailed analysis of failure cases in the next version.
---
### **Q6: Smaller, curated subset.**
**A6:** We will release the ablation subset to support further research.
---
[1] Zhang, Beichen, et al. "Long-CLIP: Unlocking the long-text capability of CLIP." ECCV, 2024.
[2] Zhang, Yuhui, et al. "Why are visually-grounded language models bad at image classification?” arXiv, 2024.
[3] Liu, Yanqing, et al. "CLIPS: An Enhanced CLIP Framework for Learning with Synthetic Captions." arXiv, 2024.
---
Rebuttal Comment 1.1:
Comment: I have read the author's rebuttal and the other reviews. The authors have adequately addressed the concerns raised, including providing additional analysis and results.
While the core recaptioning technique isn't novel, the scale, use of open models, and the public release of this large dataset represent a significant and valuable contribution to the community. The extensive evaluations and insights derived from this work are solid.
Therefore, I maintain my recommendation for 4:Accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer ogfc
Thank you for recognizing and acknowledging the value of this work — we truly appreciate it! Please feel free to let us know if you have any other questions.
Thanks
Authors | Summary: The paper introduces Recap-Datacomp-1B a large scale image-text data, where the text is "synthetically generated" using large multimodal models. The authors observed that compared to original web-crawled data, models trained on this generated synthetic texts are performing better for multimodal retrieval and text-to-image generation tasks.
Claims And Evidence: Experimental results corroborate the claim that the recap dataset is better for training multimodal models.
Methods And Evaluation Criteria: Results are shown on standard tasks using acceptable metrics.
Theoretical Claims: NA
Experimental Designs Or Analyses: Experiments are rigorous,
Supplementary Material: There is no separate supplementary materials. But I looked at the appendix at the end of the main paper.
Relation To Broader Scientific Literature: It's understood that fine-grained, aligned multimodal data is essential for learning good image-text models under our current definition of goodness. That has been the motivation behind creating larger and larger datasets. As the authors observe, the improvement, especially in fine-grained textual instruction following for text-to-image diffusion models, is notable. I'd argue that the research question, on the other hand, should be about how to incorporate common sense, or how to reduce dependence on large-scale annotations.
Essential References Not Discussed: I can see that others have trained LLaVa-LLaMa3 models earlier: https://huggingface.co/Intel/llava-llama-3-8b
Is there any reason to train the model again?
Other Strengths And Weaknesses: See the "Relation To Broader Scientific Literature" section above.
Other Comments Or Suggestions: Although it's a new dataset with improved fine-grained image-text data, I find its technical innovation lacking.
Questions For Authors: I encourage the authors to report training time. Currently there is no mention of the complexity associated with creating the Recap dataset, nor of training models using this newly generated data.
Ethical Review Concerns: Thank you for providing the ethical statements about how the limitation of model-based filtering might have let some of the unsafe content to be used for training and specifying the copyright details.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for recognizing that our dataset significantly enhances performance in both multimodal retrieval and text-to-image generation tasks. We appreciate your insights into how advanced MLLMs can reduce reliance on costly human annotations and enable greater scalability.
Our Recap data relies solely on LLaVA-based annotation, which **inherently incorporates commonsense knowledge** from a human perspective — since LLaVA is an aligned model trained on human preference data. In other words, our Recap approach not only reduces dependence on manual annotation but also opens up promising directions for future research.
Our experiments further show that mixed-caption training — integrating both CLIP and T2I methods — offers a robust way to incorporate **external** commonsense knowledge, leading to consistent performance improvement.
In essence, our dataset serves as a valuable baseline and starting point for exploring these innovative strategies. We plan to expand on the broader impact of this work in the upcoming version.
---
### **Q1: Why not use earlier LLaVa-LLaMa3 models?**
**A1:** Thank you for bringing up this work. First, we trained our own version to maintain full control over the training process while adhering to open-research practices and a broader licensing framework. Second, our initial investigation revealed that models trained on the original LLaVA-1.5 data consistently started their outputs with "This image is a …". To address this issue, we integrated a high-quality dataset (HQ-Edit) into our training process, which alleviated the problem. We will include this essential reference in our next version.
---
### **Q2: Technical innovation.**
**A2:** We would like to stress that our primary focus is on creating **Recap-DataComp-1B**, which, to the best of our knowledge, is **the first publicly available image-text dataset with synthetic captions scaled to the billion level using LLaMA-3**. We believe this represents a novel and significant contribution to the multimodal research community. While the concept of recaptioning is not new, scaling it to this magnitude with advanced LLMs has not been seen before. More importantly, this large-scale dataset enables the first public, extensive, and fair investigations into training CLIP and T2I diffusion models with high-quality synthetic captions. For example, our results comprehensively demonstrate that Recap-DataComp-1B significantly enhances cross-modal tasks, long-context understanding, and text-to-image generation. Based on this evidence, we believe that Recap-DataComp-1B is a novel and important contribution to the community, with the strong potential to provide significant benefits to future multimodal research.
---
### **Q3: Complexity of creating and training models of our dataset.**
**A3:** We benchmark the inference speed of LLaVA-1.5-LLaMA3-8B on the TPU-V4 256 hardware, achieving a throughput of 382 images per second. At this rate, generating captions for the entire Recap-DataComp-1B dataset (~1 billion images) would take approximately 29 days of continuous computation.
Regarding CLIP training, training a ViT-L/16 model for two epochs (~2.56 billion total samples) on Recap-DataComp-1B requires ~ 1 day on TPU-V3 256 infrastructure. For DiT training, training a base-sized DiT model with a batch size of 2048 for 650K steps takes approximately 7 days using TPU-V3 256 hardware.
We will provide comprehensive details about computational complexity and runtime metrics in the next revision. | null | null | null | null | null | null |
CEGA: A Cost-Effective Approach for Graph-Based Model Extraction and Acquisition | Accept (poster) | Summary: This paper introduces CEGA for model extraction attacks against GNNs with limited query budgets. The authors address practical constraints in real-world attack scenarios by designing a node querying strategy that incrementally refines node selection over multiple learning cycles. CEGA integrates three key considerations: representativeness (selecting nodes that capture graph structure), uncertainty (prioritizing nodes near decision boundaries), and diversity (ensuring comprehensive exploration). Experiments on six benchmark datasets demonstrate that CEGA outperforms baseline approaches in terms of accuracy, fidelity, and F1 score under strict query-size constraints. The authors provide theoretical guarantees for their approach and highlight the vulnerability of GNNs to extraction attacks despite resource limitations.
Claims And Evidence: The paper's primary claims regarding CEGA's superiority over baseline methods are generally well-supported by comprehensive experimental evidence. The authors demonstrate this through extensive comparisons on six different benchmark datasets, showing consistently better performance for CEGA across accuracy, fidelity, and F1 score metrics. The performance gap analysis between budget-constrained and full subgraph models further strengthens their claims.
However, the claim that CEGA achieves "cost-effective" attacks would be stronger with a direct efficiency comparison (e.g., computational cost, time complexity in practice) against baseline methods, beyond just the theoretical complexity analysis provided.
Methods And Evaluation Criteria: The proposed methodology is logically sound and well-suited for the problem. The three-component node selection strategy addresses key challenges in model extraction with limited budgets. The evaluation criteria appropriately measure both the performance of the extracted model and its similarity to the target model.
The benchmark datasets represent diverse graph structures and applications, providing a robust testbed. However, the evaluation would be stronger with more real-world MLaaS settings where API rate limits and costs are explicitly modeled, rather than simply using node count as a proxy for query budget.
Theoretical Claims: I reviewed the two main theoretical claims (Theorems 3.1 and 3.2) and their proofs in Appendix B. Theorem 3.1 on complexity evaluation appears correct, leveraging established complexity results for GCN operations. The proof appropriately breaks down computational components and aggregates them to derive the overall complexity. Theorem 3.2 on the existence of a feasible permutation parameter ε is also sound. The proof uses Hoeffding's inequality to establish probabilistic bounds on the perturbation's effect, ensuring stability in the model's predictions. Both proofs support the practical feasibility of the proposed approach, especially for the uncertainty evaluation component.
Experimental Designs Or Analyses: The experimental design is generally robust. However, I identified several limitations:
1. The attack node pool is restricted to 10% of nodes (25% for Cora-Full), which may not reflect real-world knowledge constraints.
2. Hyperparameter selection process isn't fully detailed, raising questions about potential tuning bias.
Supplementary Material: No
Relation To Broader Scientific Literature: No
Essential References Not Discussed: No
Other Strengths And Weaknesses: Other weaknesses:
1. The initial query selection (2 nodes per class) assumes prior knowledge of class distribution, which may be unrealistic in practical attack scenarios.
2. The ablation study removes one component at a time, but doesn't explore interactions between components (e.g., combining only representativeness and uncertainty)
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We provide a point-to-point reply below.
**Response to Review on Claims**: Thank you for pointing this out. We would like to clarify that the *cost-effectiveness* we attribute to CEGA refers to two major aspects. First, CEGA shows its ability to **achieve high fidelity to $f_{\mathrm{T}}$ using fewer queried nodes**, which we show in **Figure 1**. In pay-per-query settings, this translates into considerable cost savings for those who aim to replicate an expensive target model under a limited budget. Second, CEGA shows computational efficiency as being **1.5\~4x more efficient** in terms of running time across most tested cases and datasets than AGE, which delivers competitive performance with CEGA on **CoCS** and **CoP**. This observation aligns with CEGA's theoretical efficiency, as we presented in **Section 3.3**.
**Response to Review on Evaluation**: We thank you for your constructive feedback. First, we agree on the importance of assessing CEGA's cost-effectiveness in a simulated real-world setting involving modeled constraints. Our current evaluation setup for CEGA is adaptable for such a setting, as an evaluation on a different number of nodes selected by CEGA or benchmark models can be assessed by modifying the right cutoff for the trajectories in **Figure 1**. We will thoroughly evaluate CEGA's performance and time complexity under simulated real-world MLaaS settings in the next version of our manuscript.
**Response to Experimental Design \#1**: We thank you for your thoughtful feedback. To maintain consistency with common practice in the field, we follow the setup of numerous widely accepted works on MEAs against GNNs, notably [1] and [2]. Heuristically, we want to show CEGA's effectiveness in selecting informative nodes with **partial knowledge** to the target graph that contains **sufficient information** for MEA tasks. We use 25% for Cora-Full due to its significantly higher number of classes. We will provide detailed justification for our MEA framework setup and conduct additional experiments under varying attack node pool sizes in our next version.
[1] Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realisation, Wu et al., ASIA CCS 2022.
[2] Model Stealing Attacks against Inductive Graph Neural Networks, Shen et al., IEEE Symposium on Security and Privacy, 2022.
**Response to Experimental Design \#2**: We thank you for pointing this out. In our practice, we follow the heuristics outlined in **Section 3.2** and design a setup as specified in **Equation (7)**. Specifically, we use **grid search**, which is commonly used in high-impact work on GNN evaluation [1], to find the best combination of hyperparameters for CEGA, namely initial weights $\alpha_1$, $\alpha_2$, $\alpha_3$, weight gap $\Delta$, and weight change curvature $\lambda$, under this setup. To ensure generalizability, we apply the same set of hyperparameters across all 6 datasets. We will include further details on our tuning process in our next version.
[1] Pitfalls of Graph Neural Network Evaluation, Shchur et al., arXiv:1811.05868, 2018.
**Response to Other Weakness \#1**: We thank you for raising this point. We would like to clarify that ensuring even class distribution for the initial query is a widely accepted common practice in the field, originating from fundamental contributions to GNN research [1] and continuously adopted by subsequent works we use as benchmarks [2][3]. In practice, it is reasonable to assume that the adversaries can attain partial knowledge of the class distribution through domain expertise in fields like *Drug-Target Interaction* or external sources such as *accounts outside the target network*. They will focus on obtaining such knowledge if they believe it brings them long-term monetary or strategic advantages based on a high-fidelity extracted model.
[1] Revisiting Semi-supervised Learning with Graph Embeddings, Yang et al., ICML 2016.
[2] Active Learning for Graph Embedding, Cai et al., arXiv:1705.05085, 2017.
[3] GRAIN: Improving Data Efficiency of Graph Neural Networks via Diversified Influence Maximization, Zhang et al., VLDB 2021.
**Response to Other Weakness \#2**: We thank you for pointing this out. Our ablation study setup stated in **Section 4.4** that removes the contribution of one out of the three modules, namely *representativeness*, *uncertainty*, and *diversity*, follows the settings of the most recent works on GNN node classification tasks [1]. In conclusion,
1. CEGA outperforms nearly all the ablated models in most datasets, specifically models with (1) a combination of only representativeness and diversity and (2) a combination of only uncertainty and diversity.
2. CEGA provides more stable estimates across most datasets and metrics compared to the model that combines only representativeness and uncertainty.
[1] Semantic GNN with Multi-Measure Learning for Semi-Supervised Classification, Lin et al., EAAI 2025. | Summary: The paper proposes a method for model extraction attack, in the setting where the number of prediction queries is extremely tight.
Node predictions are queried in different rounds based on three criteria: representativeness, uncertainty, and diversity.
For the entropy-based approach, time and space complexities are computed. The experiments show that the method is effective in marginally improving the metrics.
Claims And Evidence: The improvements are often marginal, and sometimes significant.
Not included in score: In Tab. 1, in the 1st column the proposed method's metric is mistakenly in bold font; please correct it.
Methods And Evaluation Criteria: I have questions and doubts about the discussion of Sec. "History-Based Analysis for Uncertainty".
The variable $\tau$ is generated from a normal distribution and added to the node attributes. Why is doing so referred to as "permutation" in the manuscript?
In summary, although the paper shows that the 3 introduced criteria lead to marginal improvement, I'm not sure if the contribution is sufficient for an ICML paper. Moreover, most improvements are marginal.
Theoretical Claims: The computation and space complexity of the entropy-based approach is computed and is presented as a theorem.
Although the time/space complexity is important, in my opinion it shouldn't be presented as a theorem, nor does it count as a contribution.
Experimental Designs Or Analyses: The approach is empirically validated.
Supplementary Material: -
Relation To Broader Scientific Literature: -
Essential References Not Discussed: -
Other Strengths And Weaknesses: The writing and notation were ambiguous to me throughout the paper.
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We provide a point-to-point reply below to address the mentioned questions and concerns.
> **Reviewer**: The improvements are often marginal, and sometimes significant. I'm not sure if the contribution is sufficient for an ICML paper. Moreover, most improvements are marginal.
**Authors**: We thank you for raising this concern. We highlight the primary technical contribution of CEGA as tackling the highly challenging task of delivering strong and consistent performance, saving query nodes while achieving high-fidelity model extraction. Such an advantage is especially evident on **Amazon_Computer**, **Cora_Full**, and **DBLP**, as shown by the trajectories in **Figure 1**. In contrast, AGE and GRAIN both suffer from instability, with AGE underperforming on **Amazon_Photo** and **Cora_Full**, and GRAIN struggling on **Coauthor_CS**, **Amazon_Computer**, and **DBLP**. As a remark, many published papers in related fields have not achieved significantly superior performance across all the tested datasets, such as [1][2][3], with some underperforming against benchmarks by a larger margin than CEGA has. We believe that CEGA makes a meaningful contribution to the further advancement of our community by introducing a practical and underexplored problem supported by comprehensive and stable empirical results. We are confident that our work can guide future efforts in research question formulation, method development, and theoretical analysis in the area of GNN security and democratization.
[1] Graph Attention Networks, Veličković et al., ICLR 2018.
[2] GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation, Brockschmidt, ICML 2020.
[3] Directional Message Passing for Molecular Graphs, Gasteiger et al., ICLR 2020.
> **Reviewer**: In Tab. 1, in the 1st column the proposed method's metric is mistakenly in bold font, please correct it.
**Authors**: We appreciate your attention to detail. We acknowledge and appreciate your suggestion to apply bold fonts only to the best performance for each task and will synchronize the new standard in the next version of our manuscript. We hope that our commitment addresses your concerns about the clarity of our work.
> **Reviewer**: I have questions and doubts about the discussion of Sec. "History-Based Analysis for Uncertainty". The variable $\tau$ is generated from a normal distribution and added to the node attributes. Why is doing so referred to as "permutation" in the manuscript?
**Authors**: We thank you for pointing this out. We agree that the term *permutation* is not precise enough. In the next version of our manuscript, we will replace this terminology with *perturbation*, which more accurately describes the application of a random Gaussian oscillation to the node attributes. This procedure allows us to evaluate the nodes' uncertainty regarding classification labels evaluated by the history-inspired interim model $f_{\mathrm{T}}$ under attribute variation. We hope our committed edits will address your concern about the clarity of our terminology.
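For intuition only (this sketch is ours, not code from the manuscript), the perturbation-based uncertainty evaluation could look as follows; `predict_fn`, the toy classifier, and the noise strength `eps` are illustrative placeholders:

```python
import numpy as np

def perturbation_uncertainty(predict_fn, X, eps, n_draws=20, seed=0):
    """Fraction of Gaussian-noise draws under which each node's
    predicted label flips: a simple per-node instability score in [0, 1]."""
    rng = np.random.default_rng(seed)
    base = predict_fn(X)                      # labels on the clean attributes
    flips = np.zeros(X.shape[0])
    for _ in range(n_draws):
        noisy = X + eps * rng.standard_normal(X.shape)
        flips += predict_fn(noisy) != base
    return flips / n_draws

# toy classifier: label = 1 iff the first attribute is positive
predict = lambda X: (X[:, 0] > 0).astype(int)
X = np.array([[2.0, 0.0],    # far from the decision boundary
              [0.05, 0.0]])  # near the boundary, so it flips more often
scores = perturbation_uncertainty(predict, X, eps=0.3)
```

Nodes whose labels are unstable under such attribute perturbation would receive a higher uncertainty score and thus higher query priority.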
> **Reviewer**: Although the time/space complexity is important, but in my opinion it shouldn't be presented as a theorem nor does it count as a contribution.
**Authors**: We appreciate your feedback and understand your concern about presenting CEGA's time and space complexity as a standalone theorem. In our next version, we will no longer treat the results of our computation on CEGA's time/space complexity as a theorem. We hope our committed edits will address your concern about the arrangement of contents covered in the manuscript.
> **Reviewer**: The writing and notation were ambiguous to me throughout the paper.
**Authors**: We thank you for the feedback regarding the writing and notation of our manuscript. We are eager to know about the specific locations that make you feel the ambiguity in our writing and notation hinders the understanding of our work. We are open to further discussion on notation consistency and the overall clarity of our current manuscript, and we hope that more specific feedback from you in this aspect will further improve the quality of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the explanations. I increased my score accordingly. | Summary: This paper explores the vulnerability of Graph Neural Networks (GNNs) to Model Extraction Attacks (MEAs), particularly in scenarios with limited query budgets and initialization nodes. The authors propose a novel node querying strategy that incrementally refines the selection of nodes over multiple learning cycles, using historical information to enhance extraction efficiency.The paper makes three main contributions: it introduces a novel problem formulation for Model Extraction Attacks (MEAs) against GNNs with limited query budgets, focusing on node-level graph learning tasks; proposes the CEGA framework, which identifies the most beneficial queries during training by considering representativeness, uncertainty, and diversity; and provides extensive experiments on real-world datasets, demonstrating that CEGA outperforms existing methods in terms of fidelity, utility, and key performance metrics.
Claims And Evidence: This article claims that although graph neural networks (GNNs) have demonstrated superior performance in many applications, they are vulnerable to model extraction attacks (MEAs) in machine learning as a service (MLaaS) environments.Attackers can steal the functionality of GNN models by strategically querying the target model, thereby replicating high fidelity models. Such attacks can lead to serious consequences, such as copyright and patent infringement, especially in the pharmaceutical industry where GNNs are widely used for predicting drug molecule target interactions. If these GNN models are extracted, it may threaten the trade secrets of pharmaceutical companies, leading to unauthorized redistribution and unfair competition, ultimately causing significant financial and reputational losses.
Methods And Evaluation Criteria: The article used six commonly used graph datasets (such as Coauthor CS, Amazon Computer, Cora Full, DBLP, etc.), which cover various application scenarios from collaborative networks and product recommendations to academic citations. Tested on different datasets, CEGA demonstrated strong adaptability and robustness in various situations. The node selection strategy proposed by CEGA considers the representativeness, uncertainty, and diversity of nodes, thereby avoiding excessive concentration on certain nodes.
Theoretical Claims: Theorem 3.1 analyzes the temporal and spatial complexity of the CEGA method. The theorem states that the entropy calculation of the CEGA method introduces a time complexity of O (CN+N log N) and a space complexity of O (CN), where C is the number of categories and N is the number of nodes in the graph. This is based on the node selection and graph propagation process in the CEGA framework, and derives the time and space complexity.
Theorem 3.2 raises the issue of the existence of disturbance intensity ϵ. Specifically, when measuring the uncertainty of candidate nodes, there exists an appropriate disturbance intensity ϵ that ensures the stability of the model is not compromised after disturbance. This means that the CEGA method can ensure the effectiveness of the model when considering node disturbances, while not introducing excessive errors.
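To illustrate where the O(CN + N log N) terms in Theorem 3.1 come from (a hypothetical sketch, not the paper's code): scoring each of N nodes by the entropy of its C-class prediction costs O(CN), and ranking the scores costs O(N log N). The array shapes and the `top_k_uncertain` helper are invented for illustration:

```python
import numpy as np

def entropy_scores(probs):
    """Shannon entropy of each node's C-class prediction: O(C*N) work."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def top_k_uncertain(probs, k):
    """Sort the N entropy scores (O(N log N)) and return the k most uncertain nodes."""
    return np.argsort(entropy_scores(probs))[::-1][:k]

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 5))  # N=100 candidate nodes, C=5 classes
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
picked = top_k_uncertain(probs, k=10)  # indices of the 10 highest-entropy nodes
```

The O(CN) space term then corresponds to holding the N-by-C probability matrix in memory.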
Experimental Designs Or Analyses: The experiment used six common graph datasets (Coauthor CS, Amazon Computer, Cora Full, DBLP, etc.). These datasets cover various application scenarios, from collaborative networks to product recommendation systems, with broad representativeness. By comparing with benchmark methods such as Random, GRAIN (an active learning method based on neural networks), and AGE (another active learning method), the superiority of the CEGA method has been verified. The article evaluated the performance of various methods under different query constraints by testing different query budgets (from 2C to 20C nodes). The article also conducted ablation experiments to analyze the contribution of each module to overall performance by removing certain evaluation modules of CEGA, such as structural centrality, uncertainty, and diversity.
Supplementary Material: Appendix B: Contains mathematical proofs for complexity analysis and the existence of perturbation strength ϵ.
Appendix C.1 provides statistical information on the six graph datasets used in the article, including the number of nodes, edges, and specific features of each dataset; Appendix C.2 provides a detailed list of all hyperparameter settings used in the experiments presented in the article. This includes the configuration of CEGA methods and benchmark methods; Appendix C.4 presents a detailed result chart of the ablation experiment, comparing the performance of CEGA under different module removal conditions.
Relation To Broader Scientific Literature: One of the biggest innovations of the article is the proposal of the budget constrained MEA problem. Most existing research has overlooked budget constraints in practical applications, while CEGA proposes a more practical attack method by introducing limitations on the number of queries and query nodes per round. Current research mainly focuses on the overall information or local structure of the graph, while CEGA introduces a comprehensive consideration of historical information and structural centrality, making attacks more efficient on actual complex graph structures.
Essential References Not Discussed: No
Other Strengths And Weaknesses: originality: The paper proposes the CEGA (Cost Efficient Graph Attack) framework, which combines budget constraints and structural complexity, providing new ideas for efficient and low-cost model extraction in practical applications. This method has high originality in GNN security research. In addition, the paper innovatively improves attack efficiency by introducing multiple node selection criteria (representativeness, uncertainty, and diversity).
significance: The paper provides a meaningful framework for conducting GNN model extraction attacks on machine-learning-as-a-service (MLaaS) platforms, which can help protect systems using GNNs in high-risk areas such as drug discovery, financial fraud detection, and healthcare from the threat of model leakage.
clarity: The structure of the paper is clear, first introducing the potential threats and challenges of GNN in practical applications, and then elaborating on the design principles, methods, and experimental evaluation of the CEGA framework in detail. The experimental part fully validated the effectiveness of the method and analyzed the contribution of each module through ablation experiments, ensuring the verifiability and reliability of the method.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your comprehensive and thoughtful feedback towards the CEGA work. We sincerely appreciate the time and expertise you dedicated to carefully reviewing our manuscript. We would like to provide some further insights to our paper regarding your comments to fully address the potential concerns.
> **Reviewer**: The node selection strategy proposed by CEGA considers the representativeness, uncertainty, and diversity of nodes, thereby avoiding excessive concentration on certain nodes.
**Authors**: We thank you for highlighting this key aspect of CEGA. The major motivation for imposing diversity in CEGA's node selection strategy is precisely to prevent over-concentration on particular subsets of the graph, which often consist of nodes sharing the same label. Avoiding such concentration patterns not only improves the fidelity of the extracted model in classifying nodes belonging to underrepresented label categories, but also enhances the stability of CEGA across different initializations and levels of adversary knowledge.
> **Reviewer**: Theorem 3.1 analyzes the temporal and spatial complexity of the CEGA method.
**Authors**: We appreciate your correct understanding of the theoretical complexity analysis. As raised by Reviewers oxao and JNMU, we acknowledge the necessity to make a clearer evaluation of both the theoretical and practical computation complexity of CEGA. In the next version of our manuscript, we will reorganize the presentation of Theorem 3.1 and related discussion in real-world MLaaS setup. We are open to further suggestions with you on refining the study of the computation complexity of CEGA.
> **Reviewer**: The experiment used six common graph datasets. By comparing with benchmark methods such as Random, GRAIN, and AGE, the superiority of the CEGA method has been verified. The article evaluated the performance of various methods under different query constraints by testing different query budgets from 2C to 20C nodes. The article also conducted ablation experiments to analyze the contribution of each module to overall performance by removing certain evaluation modules of CEGA.
**Authors**: We thank you for your detailed and accurate summary of our experimental setup. Your expertise-based recognition of our effort to maintain consistency with widely accepted common practice in the respective field is highly appreciated. As discussed with Reviewers oxao and JNMU, our evaluation step is carefully designed to align with standards adopted throughout GNN-based MEA literature. We will prepare further clear justification on the CEGA evaluation setup as suggested by Reviewers oxao and JNMU, and we are open to incorporating any additional suggestion you may have that could enhance the comprehensiveness of our evaluation.
> **Reviewer**: Most existing research has overlooked budget constraints in practical applications, while CEGA proposes a more practical attack method by introducing limitations on the number of queries and query nodes per round. Current research mainly focuses on the overall information or local structure of the graph, while CEGA introduces a comprehensive consideration of historical information and structural centrality, making attacks more efficient on actual complex graph structures.
**Authors**: We thank you for the effort you put into correctly understanding CEGA's main contributions. Your comment accurately captures our intent to fill a critical gap in the GNN-based MEA literature: explicitly addressing realistic query-budget constraints through a history-based query-by-learning scheme while accounting for GNN-specific obstacles such as complex graph structure. As also noted in our discussion with Reviewer oxao, achieving consistently high performance across diverse datasets under such constraints remains highly meaningful for our community, and we are glad that your feedback acknowledges this contribution.
**Overall Responses by Authors**: We thank you once again for your thoughtful engagement and positive comments to our paper. We are fully committed to making the paper more solid and comprehensive in the next version as per the feedback received from Reviewers oxao and JNMU. We welcome any further discussion with you and will respond promptly to any of your follow-up questions. | null | null | null | null | null | null | null | null |
Measuring Variable Importance in Heterogeneous Treatment Effects with Confidence | Accept (poster) | Summary: This paper tackles measuring variable importance in conditional average treatment effect (CATE) functions. One of the few current approaches consists in applying the LOCO method to CATE estimation ; as the CPI method is an alternative to LOCO, authors propose applying CPI to CATE estimation. They prove the consistency of both CPI and LOCO to the total Sobol index for a specific risk function, where the variance of CPI is driven by its less error-prone missing covariate estimation procedure. They empirically show that CPI leads to faster convergence and lower variance than LOCO on CATE variable importance measure problems.
(EDIT : updated my score following the rebuttal)
Claims And Evidence: Overall, the method seems well-justified; I just have a slight concern about Assumption 3.1, which looks like the backbone of the method and its advantages, and which I find lacks a bit of justification. Does it come from the past literature on CPI? How is it generally justified?
Methods And Evaluation Criteria: Yes
Theoretical Claims: I did quickly and did not find any issues
Experimental Designs Or Analyses: Yes, and everything looks good
Supplementary Material: Parts A.2 to A.10
Relation To Broader Scientific Literature: As the LOCO method was applied to CATE estimation, and the CPI method was not, the submission fills this gap. It does justify theoretically and experimentally the use of CPI compared to LOCO.
Essential References Not Discussed: Not to my knowledge
Other Strengths And Weaknesses: The paper is original in the sense that it is a combination of existing ideas; it might be seen as relatively low-hanging fruit, but the theoretical and experimental evaluations done by the authors increase the significance of their work. The paper is generally clear, except on the point about Assumption 3.1 above and the total Sobol index, which seems to be a critical notion of the paper (see questions below).
Other Comments Or Suggestions: I strongly suggest you explain in greater depth the total Sobol index, eg where it comes from and why it is very appropriate for variable importance. It seems like a critical quantity in your work, as the ground-truth measure to which either CPI and LOCO is expected to converge. You might explain why it is to be taken as a ground-truth.
I do not see any typos.
Questions For Authors: As said previously, can you please give details (literature + relevance) for :
1) Assumption 3.1
2) The total Sobol index?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer. The comments and questions are relevant and will help us improve the paper.
# Discussion of assumption 3.1
- Assumption 3.1, that a given covariate can be decomposed as a function of the other covariates plus an additive independent noise term, is quite common in the literature. It was notably discussed in (Candès et al., Panning for Gold, 2017), which introduced the model-X knockoff framework. In section 1.3, the authors write, "assuming we know everything about the covariate distribution but nothing about the conditional distribution $Y \mid X_1, \dots, X_p$". We discuss the relevance of this assumption in remark 3.3 of the main paper. It is motivated by plausible scenarios where, for instance, the relationship between the covariates is much simpler than the relationship between the covariates and the outcome. For example, in genetics, the relationship between single nucleotide polymorphisms (SNPs) is easier to model than the relationship between SNPs and a disease. Also, when a large amount of unlabeled data is available, learning the conditional distribution of the covariates can be easier than learning the outcome, for which only a few labeled examples are available. Finally, a temporal aspect may also be considered, where the covariates are observed simultaneously while the outcome is observed later.
- This assumption was also made in previous published work on CPI (Chamma et al, 2023, NeurIPS), which also revealed that in the context of prediction tasks, CPI was a robust method for variable importance estimation.
- We also refer the reviewer to the response provided to Reviewer jT2T, section "Complexity of the conditional distribution estimation." We clarify that the IHDP benchmark uses real-world data, where this assumption cannot be verified. And present an additional simulation experiment where the dependency structure of the covariates is made more complex, using a latent variable model. PermuCATE still shows a lower variance than LOCO in this scenario.
# Description of the total Sobol index
- The total Sobol index is a well-studied quantity (see, for instance, [1, 2, 3, 4]) that measures the influence of a variable or group of variables on the output of a (possibly complex) model. For a model $\mu$ predicting an outcome $y$ given covariates $X$, the total Sobol index of a variable $j$ (or group) is given by: $$\Psi_j^* = \mathbb{E}[\mathbb{V}[\mu(X) \mid X^{-j}]] = \mathbb{E}[(y-\mu_{-j}(X^{-j}))^2]-\mathbb{E}[(y-\mu(X))^2],$$
where $\mu_{-j}$ is the model where the variable $j$ is removed.
- By writing the total Sobol index as a loss difference, we can see that it measures the loss increase when removing the variable $j$ from the model, which provides a measure of the importance of the variable. It can also be seen as an unnormalized generalized ANOVA (difference of $R^2$) [5]:
$$\Psi_j^* = \mathbb{V}(y)\left[\left[1-\frac{\mathbb{E}[(y-\mu(X))^2]}{\mathbb{V}(y)}\right]-\left[1-\frac{\mathbb{E}[(y-\mu_{-j}(X^{-j}))^2]}{\mathbb{V}(y)}\right]\right] .$$
- While this is true for predictive models, extending this definition to CATE estimation is not straightforward due to the fundamental problem of causal inference: the ground truth CATE is not observable (in clinical trials, patients are either in the treatment or control group, not both).
- It can be seen from the above equation that LOCO is a plug-in estimator of the total Sobol index, where the model $\mu$ is replaced by its finite sample estimate $\hat{\mu}$. We show in the paper why this plug-in estimator is not the best choice for estimating the total Sobol index, especially in finite samples.
We will integrate this intuitive presentation of the total Sobol index in the final version of the paper.
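This loss-difference view of the total Sobol index is easy to illustrate numerically. Below is a minimal numpy sketch (not code from the paper) on a linear data-generating process where the true index of the first covariate is known in closed form; ordinary least squares via `np.linalg.lstsq` stands in for an arbitrary learner:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 3))
# y = 2*x0 + x1 + noise, so the true total Sobol index of x0 is Var(2*x0) = 4
y = 2 * X[:, 0] + X[:, 1] + rng.normal(size=n)

X_tr, X_te = X[: n // 2], X[n // 2 :]
y_tr, y_te = y[: n // 2], y[n // 2 :]

def fit_predict(A_tr, targets, A_te):
    """Ordinary least squares: fit on the training block, predict on the test block."""
    coef, *_ = np.linalg.lstsq(A_tr, targets, rcond=None)
    return A_te @ coef

# full model mu vs. LOCO sub-model mu_{-j}, refitted without x0
mse_full = np.mean((y_te - fit_predict(X_tr, y_tr, X_te)) ** 2)
mse_loco = np.mean((y_te - fit_predict(X_tr[:, 1:], y_tr, X_te[:, 1:])) ** 2)

psi_hat = mse_loco - mse_full  # plug-in estimate of the total Sobol index of x0
```

Here `psi_hat` recovers a value close to the ground truth of 4, while the finite-sample fluctuations of the two refitted models illustrate where the plug-in (LOCO) estimator picks up variance.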
- [1] Sobol, IM. "Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates." Mathematics and computers in simulation, 2001
- [2] Bénard, C. et al. Mean decrease accuracy for random forests: inconsistency, and a practical solution via the Sobol-MDA, Biometrika, 2022
- [3] Hooker, G. et al. Unrestricted permutation forces extrapolation: variable importance requires at least one more model, or there is no free variable importance. Stat Comput, 2021
- [4] Williamson, B. et al. A General Framework for Inference on Algorithm-Agnostic Variable Importance. Journal of the American Statistical Association, 2022
- [5] Williamson, B et al. Nonparametric variable importance assessment using machine learning techniques. Biometrics, 2021
---
Rebuttal Comment 1.1:
Comment: Many thanks, this addresses my concerns. I will update my score to a clear Accept. | Summary: The submission describes a variable importance method (PermuCATE) generalizing CPI (Chamma et al. 2023);
a theoretical analysis of PermuCATE is performed, showing the behaviour of the estimator in finite samples.
Extensive experiments are implemented over a variety of datasets comparing PermuCATE against LOCO.
Claims And Evidence: Yes
Methods And Evaluation Criteria: yes they make sense
Theoretical Claims: no i did not check proofs in the appendix
Experimental Designs Or Analyses: yes the experimental design is appropriate and sound
Supplementary Material: no
Relation To Broader Scientific Literature: the method is an extension of previous work (Chamma et al. 2023, Verdinelli & Wasserman, 2023 and Hines et al., 2022).
The basic references of this work are all preprints.
Essential References Not Discussed: I am not aware of previous references not discussed.
Other Strengths And Weaknesses: while the paper is well written in general, I feel it should provide a better description of and connection between CATE and variable importance.
For instance, Verdinelli & Wasserman, 2023 make a distinction between three types of variable importance: population (how important is age in a regression function?), algorithmic (how much did age affect the estimated value of the regression?) and causal (how would the outcome have
changed if Mary had been 5 years younger?).
The present submission deals with the causal interpretation, namely the CATE, while Verdinelli & Wasserman (2023) focus on the population case. This is justified by the authors of this submission citing Hines et al., 2022 (which is a preprint not published yet?). I skimmed Hines et al., 2022 but I could not pinpoint where the connection can be made.
Meaning, how does feature importance of the CATE tell us something about effect modification or differences in the causal effect? Maybe this is naive, but I have an intuitive feeling that it is not the same.
Other Comments Or Suggestions: no
Questions For Authors: For instance, Verdinelli & Wasserman, 2023 make a distinction between three types of variable importance: population (how important is age in a regression function?), algorithmic (how much did age affect the estimated value of the regression?) and causal (how would the outcome have
changed if Mary had been 5 years younger?).
The present submission deals with the causal interpretation, namely the CATE, while Verdinelli & Wasserman (2023) focus on the population case. This is justified by the authors of this submission citing Hines et al., 2022 (which is a preprint not published yet?). I skimmed Hines et al., 2022 but I could not pinpoint where the connection can be made.
Meaning, how does feature importance of the CATE tell us something about effect modification or differences in the causal effect? Maybe this is naive, but I have an intuitive feeling that it is not the same.
Moreover it is not completely clear to me the problem setting, especially the goal of this? the objective is to obtain causal estimates on effect modification?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer. The comments and questions are relevant and will help us improve the paper.
# Previously published references
- We would like to highlight that important references for this work have been published. Specifically, Chamma et al. 2023 in NeurIPS and Verdinelli & Wasserman, 2023 in Statistical Science. We will update the bibliography to include the final published version of the Verdinelli & Wasserman paper instead of the preprint.
# Connection between CATE and variable importance
- Following the distinction made by Verdinelli & Wasserman, 2023, our work focuses on population-level variable importance. Estimating how important a particular feature is in the underlying data-generating process $\mu$ is done by estimating the total Sobol index using the LOCO procedure. For a variable $j$, $\Psi_j^* = \mathcal{L}(y, \mu_{-j}(X^{-j})) - \mathcal{L}(y, \mu(X))$, where $\mu_{-j}$ is a sub-model estimated without the variable $j$.
- In the present work, the focus on causality is motivated by the type of problem we are interested in: interventional data where patients are assigned to treatment or control groups. In Verdinelli & Wasserman's vocabulary, the underlying data-generating process we wish to study is the individual treatment effect, also known as the conditional average treatment effect (CATE), $\tau(x) = \mathbb{E}[Y(1) - Y(0) | X=x]$.
- Extending the approach from Verdinelli & Wasserman to CATE estimation is not straightforward because the ground truth CATE is not observable — patients are either in the treatment or control group, not both (the fundamental problem of causal inference). In our main paper, we show in Equations 1 and 4 that instead of the loss difference used in Verdinelli & Wasserman, a difference of feasible causal risks can be used to consistently estimate the total Sobol index for the CATE.
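To make the "feasible causal risk" idea concrete, here is a toy numpy sketch under a randomized trial with a known propensity of 1/2, using the IPW pseudo-outcome (whose conditional mean is the CATE) as the feasible risk target. This illustrates the general principle only; the specific risks in the paper's Equations 1 and 4 may differ (e.g. doubly-robust variants), and OLS stands in for an arbitrary CATE learner:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40_000
X = rng.normal(size=(n, 2))
T = rng.integers(0, 2, size=n)            # randomized treatment, propensity e = 1/2
tau = 1.0 + 2.0 * X[:, 0]                 # true CATE depends on x0 only
y = X[:, 1] + T * tau + rng.normal(size=n)

# IPW pseudo-outcome: E[y_star | X] = tau(X) when e = 1/2 is known
y_star = (T - 0.5) / 0.25 * y

X_tr, X_te = X[: n // 2], X[n // 2 :]
s_tr, s_te = y_star[: n // 2], y_star[n // 2 :]

def risk(cols):
    """Feasible causal risk of an OLS CATE model using only the given columns."""
    A_tr = np.column_stack([np.ones(len(X_tr))] + [X_tr[:, c] for c in cols])
    A_te = np.column_stack([np.ones(len(X_te))] + [X_te[:, c] for c in cols])
    coef, *_ = np.linalg.lstsq(A_tr, s_tr, rcond=None)
    return np.mean((s_te - A_te @ coef) ** 2)

# LOCO-style risk differences: importance of each covariate for the CATE
psi_0 = risk([1]) - risk([0, 1])   # drop x0: close to Var(2*x0) = 4
psi_1 = risk([0]) - risk([0, 1])   # drop x1: close to 0 (x1 is not an effect modifier)
```

Note that `x1` appears in the outcome model but not in the CATE, so its CATE importance is near zero: the risk difference targets effect modification, not prognostic strength.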
# Motivation and problem setting
- The motivation for this approach is to understand, at the population level, what drives heterogeneity in the treatment effect. For example, in a clinical setting, a given treatment might lead to adverse events in a subset of patients (e.g., those with a specific genetic profile or lifestyle ...). The proposed approach would then allow us to identify the risk factors (e.g., genetic profiles or lifestyle ...) associated with a higher risk of adverse events when treated.
We appreciate the reviewer's comment and will include this clarification and discussion in the terms of Verdinelli & Wasserman in the final version of the paper. | Summary: The paper proposes a new explainability method for the conditional average treatment effect (CATE) estimators, namely, PermuCATE. PermuCATE measures global features importances and is based on a conditional permutation importance (CPI) methods. The authors demonstrated that PermuCATE aims to estimate the expected conditional variance of CATE, given the subset of covariates (total Sobol index). Also, they showed, that under additional assumptions, PermuCATE achieves lower variance than the existing leave-one-covariate-out (LOCO) method. Ultimately, the authors empirically demonstrate the superiority of their method in terms of statistical power and precision.
**Post-rebuttal response**: I thank the authors for addressing the majority of my concerns. Still, I have two remaining concerns: (1) the presentation and (2) the efficiency of the method.
1. The theoretical results (namely, Propositions 3.4 and 3.5) need to be carefully revisited, especially, considering the large number of corrections and edits during the rebuttal. Here, I also mean streamlining the notation.
2. I read the latest authors' response, and now it looks like the proposed method is asymptotically strictly worse than the existing LOCO: the proposed method contains an error of first order w.r.t. the estimated $\hat{\tau}$, whereas LOCO has the same error squared (hence, less sensitive to the estimation of $\tau$). Therefore, I don't see how a method with a priori asymptotically worse performance can be preferred in practice, given that we cannot reliably choose or validate Sobol index predictions simply from observational data.
Hence, I keep my current score and encourage authors to further adjust their method so that it achieves the lowest variance (= semi-parametric efficiency bound).
Claims And Evidence: The major claims are supported by proofs and the authors provided an extensive empirical evaluation of their method.
However, I have several major concerns regarding the claimed properties of PermuCATE:
1. I find it hard to believe that the method achieves lower variance than LOCO, given a semi-parametrically efficient variant of LOCO [1] offers the asymptotically efficient estimator of the target total Sobol index. I encourage the authors to provide a wider discussion on this (also, take into consideration issue 1 in Theoretical Claims, as it seems that Prop. 3.4 is missing an error term).
2. The residuals for conditional perturbations of covariates are estimated on the same subset (D_test) that is being used to estimate the global variable importance. Doesn’t this introduce estimation bias or compromise statistical tests for variable significance?
If all of the issues from above and below could be resolved, I would be happy to raise my score.
References:
- [1] Hines, O., Diaz-Ordaz, K., and Vansteelandt, S. Variable importance measures for heterogeneous causal effects. arXiv, 2022. doi: 10.48550/arxiv.2204.06030.
Methods And Evaluation Criteria: The chosen baseline methods are relevant and make sense. However, the authors did not discuss whether the chosen benchmarks satisfy Assumption 3.1 needed for the method’s consistency. Thus, I would like to see a clear separation between the benchmarks that do and do not satisfy Assumption 3.1.
Theoretical Claims: I found several inaccuracies in the theoretical statements:
1. In Prop. 3.4, it is hard to believe that the bias of estimating the total Sobol index does not depend on the error of estimating the ground truth CATE, namely, $O_P(\lVert \tau - \hat{\tau} \rVert^2)$. Upon checking the proof in the Appendix, it seems that indeed, the authors missed this term while transitioning between the proofs of Prop. 3.2 and 3.4.
2. I don’t think the additive noise assumption is necessary for the outcome, as introduced in Appendix A.2 (line 575). The general DR-/R-learners do not make such an assumption [1].
References:
- [1] Morzywolek, Pawel, Johan Decruyenaere, and Stijn Vansteelandt. "On a general class of orthogonal learners for the estimation of heterogeneous treatment effects." arXiv preprint arXiv:2303.12687 (2023).
Experimental Designs Or Analyses: See Methods And Evaluation Criteria.
Supplementary Material: I checked the part of the appendix relevant to the concerns that I had.
Relation To Broader Scientific Literature: The paper suggests a lower variance alternative to the existing CATE variable importance method, LOCO [1]. Yet, this comes at a cost of the additional assumption (Assumption 3.1), namely the additivity of noise for conditional distributions of covariates. In my opinion, this is a pretty strong assumption (although the authors argued differently in Remark 3.3), which limits the application of PermuCATE in practice.
References:
- [1] Hines, O., Diaz-Ordaz, K., and Vansteelandt, S. Variable importance measures for heterogeneous causal effects. arXiv, 2022. doi: 10.48550/arxiv.2204.06030.
Essential References Not Discussed: To the best of my knowledge, all the most important works were discussed here.
Other Strengths And Weaknesses: Nothing new here.
Other Comments Or Suggestions: - Did you mean “pseudo-outcomes” instead of “potential outcomes” in line 152?
- I would be more precise in lines 114-115 (2nd column) by defining what “comparable risks” mean. It is important to mention, that both the DR-learner risk (=PO-risk) and R-learner risk yield the same CATE estimators **only** in population and when the ground-truth nuisance functions are used [1].
- I had a hard time distinguishing the setting and related work. I suggest the authors split Sec. 2 into two parts for better readability.
- Some notation is not properly defined or introduced (e.g., $\beta_j$ in Eq. 6 or $\Psi_{\text{LOCO}}$ in Eq. 8).
References:
- [1] Morzywolek, Pawel, Johan Decruyenaere, and Stijn Vansteelandt. "On a general class of orthogonal learners for the estimation of heterogeneous treatment effects." arXiv preprint arXiv:2303.12687 (2023).
Questions For Authors: 1. Is there a reason why the risk difference (line 13 of Alg. 1) is divided by 2? It seems like this only complicates the derivations.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review. The comments and questions are relevant and will help us improve the paper.
# Dependence of PermuCATE on the estimate of $\tau$
- To clarify the dependence of PermuCATE on the estimation of $\tau$, we first provide some intuition: LOCO and PermuCATE quantities are estimated by computing a difference between two risks (Eqs. 1 and 4). Contrary to LOCO, PermuCATE uses the same estimator ($\hat{\tau}$) in both terms of the difference, leading to estimation terms that cancel each other out. This intuition is demonstrated in Supplement A.4. Specifically, in L715, the second-order $\tau$ estimation term cancels out using that $\mathbb{E}[(\tau (X_{P,j})-\hat{\tau}(X_{P,j}))^2|\mathcal{D_{train}}]=\mathbb{E}[(\tau (X)-\hat{\tau}(X))^2|\mathcal{D_{train}}]$, because by construction $X_{P,j}\overset{i.i.d.}{\sim}X$.
- The only term involving the estimation error is term C (L697), a first-order error term: $O(\tau-\hat\tau)$. We initially argued that it is negligible compared to second-order terms and proposed to ignore it. However, we agree that clarifying the dependence on the estimation error of $\tau$ is important and will rephrase proposition 3.4 accordingly.
- The Lipschitz assumption aimed to show that in equation 9, the bias term carries the error in estimating the conditional distribution $\nu_j$. As confirmed by experiments, this assumption is not restrictive.
To clarify these last two points, we propose to rephrase the proposition using the intermediate step (eq 9) of the proof in A.4.
## Proposition 3.4:
Under Assumption 3.1, for a consistent CATE estimator $\hat{\tau}$, the finite sample estimation of the importance for the $j^{\text{th}}$ covariate is:
$$\frac{1}{2}\mathbb{E}[\widehat{\Psi_{\mathrm{CPI}}^j}|\mathcal{D_{train}}] - \Psi_j^* = \mathbb{E}[(\hat{\tau}(X_{P,j}) - \hat{\tau}(\widehat{X_{P,j}}))^2| \mathcal{D_{train}}] + O_P(\tau - \hat{\tau})$$
**Remark**: While the first term involves the CATE estimate $\hat{\tau}$, it does not involve the estimation error.
## Corollary:
Under the additional assumption that $\hat{\tau}$ is Lipschitz continuous and $\hat\nu$ is consistent, the right term becomes $$O_P(||\nu_{j} - \widehat{\nu}_{j}||^2) + O_P(\tau - \hat{\tau})$$
# Variance comparison between LOCO and PermuCATE
- Indeed, LOCO is asymptotically efficient. However, please note that this convergence rate describes its asymptotic behavior and relies on the convergence rates of the models used. In finite samples, the estimation error of these models drives the variance of both PermuCATE and LOCO. As shown in Eqs. 5 and 7, PermuCATE has a smaller dependence on the complex model $\tau$ and, therefore, a smaller finite sample variance. This was experimentally confirmed in Figure 2 and the additional simulation study mentioned below.
# Subset used for conditional perturbations
- The goal of the conditional perturbation is to sample from the conditional distribution $X_{P,j} \sim p(X^j|X^{-j})$. Under assumption 3.1, each covariate can be decomposed as a function of the other covariates plus an additive noise term $X^j = \nu_j(X^{-j}) + \epsilon_j$. As presented in algorithm 1, the function $\nu_j$ is estimated on the training data. This avoids overfitting, which could lead to perfectly predicting $X^j$, not perturbing the data, and leading to vanishing importance. Then, to sample the additive noise term, we use permutations of the residuals: $\epsilon_j = \mathrm{shuffle}(X^j - \nu_j(X^{-j}))$. We would like to emphasize that no parameter estimation is involved in the permutation of the residuals and, therefore, no information leakage from the test set occurs. This also maintains scikit-learn API compatibility by using the `fit` method (with training data) to estimate $\nu_j$, and the `score` method (with test data) to sample perturbations.
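As a concrete illustration of this sampling step (a minimal sketch, not the paper's implementation), the following numpy snippet fits $\nu_j$ by OLS on the training split, then builds the perturbed covariate on the test split by permuting the residuals. Both the marginal distribution of the perturbed covariate and its dependence on the remaining covariates are approximately preserved:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Assumption 3.1: x0 = nu_0(x1) + independent additive noise
x1 = rng.normal(size=n)
x0 = 0.8 * x1 + rng.normal(scale=0.5, size=n)

n_tr = n // 2
x0_tr, x0_te = x0[:n_tr], x0[n_tr:]
x1_tr, x1_te = x1[:n_tr], x1[n_tr:]

# fit nu_0 on the training split (OLS stands in for any regressor here)
slope = np.dot(x0_tr, x1_tr) / np.dot(x1_tr, x1_tr)
pred_te = slope * x1_te

# conditional permutation: shuffle the test residuals, keep the conditional mean
residuals = x0_te - pred_te
x0_perturbed = pred_te + rng.permutation(residuals)
```

Because only the residuals are shuffled, `x0_perturbed` keeps the conditional-mean structure $\nu_0(x_1)$ while breaking any extra dependence of $x_0$ on the outcome, which is exactly what the importance test needs.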
# Benchmarks that do and do not satisfy Assumption 3.1
- The simulation scenarios LD, HL, and HP (see datasets in the main paper) used multivariate Gaussians, which satisfied this assumption.
- We refer the reviewer to the response provided to Reviewer jT2T, section "Complexity of the conditional distribution estimation." We clarify that the IHDP benchmark uses real-world data, where this assumption cannot be verified. Furthermore, we provide an additional experiment where the dependency structure of the covariates is more complex than that of multivariate Gaussians. In both cases, PermuCATE outperforms LOCO, supporting the practical robustness of the method with respect to Assumption 3.1.
- We also refer to the section "Discussion of model-x knockoff" in response to Reviewer jT2T, where we discuss that this assumption is common in the literature and review references. This assumption ensures valid conditional sampling from $p(X^j|X^{-j})$. However, any other sampling method could be used as a drop-in replacement.
# Factor 2 in risk difference
- As demonstrated in Proposition 3.2, PermuCATE estimates the total Sobol index up to a factor of 2.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and for correcting some of the mentioned issues.
I have a follow-up question regarding "Dependence of PermuCATE on the estimate of $\tau$". Now as you incorporated the error of estimating $\tau$ into the error of estimating the Sobol index, it looks like this error is of the same order. At the same, time this error term is squared for LOCO (according to Proposition 3.5), which suggests LOCO is more robust than the proposed method.
Also, my comments & suggestions remained unanswered. Thus, I tend to retain my current score.
---
Reply to Comment 1.1.1:
Comment: # Follow-up on the dependence of PermuCATE on the estimate of $\tau$
Thank you for following up. We would like to clarify some imprecisions in our rebuttal that may have led to confusion.
- Specifically, we would like to address the error term in PermuCATE, which comes from term C in L697 of the manuscript. This term is more precisely $O_P(\mathbb{E}_{X\sim \mathcal{D}_{\mathrm{test}}}[\tau(X) - \hat\tau(X)])$, where the expectation is taken over the test set and $\hat\tau$ is estimated on the training set.
- We want to clarify the terminology used. When we referred to the "first-order" term in PermuCATE, we meant the "linear term," which corresponds to the mean bias, using the linearity of the expectation. For example, in a linear model with centered $X$, this term will always be 0. By "second-order," we meant the "quadratic term," describing the MSE term in Proposition 3.5 for LOCO. When limited training data is available, this generalization error term is large, contributing to the increased variance of LOCO, as shown in Figure 2 (small $n$ on the left of the x-axis).
- Moreover, we did not claim that PermuCATE achieves asymptotic semi-parametric efficiency. Our experiments show that in certain scenarios, the variance of LOCO decreases faster asymptotically (see Figures 2a and 2e), which can be attributed to the MSE convergence rate. Our analysis focuses on the finite (training) sample regime, motivated by the scarcity of interventional data. In this non-asymptotic context, these estimation error terms explain the larger variance observed with LOCO, particularly with misspecified models, such as Ridge in non-linear scenarios (Figure 2b) or deep learning models (Figures 2d and 2f).
To conclude, in regimes with limited training data, which are common in interventional studies, the MSE term in LOCO becomes particularly problematic, leading to higher variance. Conversely, PermuCATE exhibits greater robustness in such contexts. While asymptotically, the variance of LOCO may decrease faster, this behavior is noticeable in data regimes where both methods benefit from sufficient statistical power (see Figure 3).
We realize that our initial wording in the rebuttal did not accurately describe the proof provided in A4. We hope this clarification addresses the reviewer's concerns.
# Comments & suggestion
We apologize for not addressing all points in our initial rebuttal. The 5000-character limit forced us to make decisions and selectively address comments. We are glad to use the additional characters here to address all the points raised:
- **L152 correction**: We indeed meant to refer to pseudo-outcomes. We will correct this in the revised manuscript.
- **Equivalence of Risks**: We agree that the risks are equivalent in the population when using the true nuisance functions. In the main paper, we used the term "directly comparable" to refer to the result in Appendix A2, which shows that the oracle PEHE appears directly in the decomposition of the PO-risk, whereas it appears in a re-weighted form in the R-risk.
- **Related Work subsection**: To improve the readability of Section 2, we plan to add a clear separation and subsection title: Related Work.
- **Notation Consistency**: To clarify notation consistency, we will add a sentence to explain that $\beta_j$ refers to the $j^{th}$ coefficient of the vector $\beta$, while $\Psi_{LOCO}^j$ corresponds to the importance of the $j^{th}$ variable measured using the LOCO approach. | Summary: The paper proposes a variable importance measure (VIM) to understand the variables driving the conditional average treatment effect (CATE) function. The measure is based on the principle of conditional permutation importance where the variable of interest is permuted while keeping matching its conditional distribution and then checking the impact on CATE through a quantity known as total Sobol index. The work studies the estimation error, finite-sample variance, and type-I and II error of the proposed VIM to an existing measure named LOCO from Hines et al. 2022. Main contributions are to develop a natural permutation-based VIM and to show that it performs favorably in extensive experiments.
### update after rebuttal
I have read the response and other reviews. The response clarifies my concerns on interpretation of the Sobol index and performance of the method for more complex covariate distributions. Thanks for including an additional experiment.
However, the theoretical claims on estimation errors/variance for LOCO vs. the proposed method require substantive changes based on the discussion with Reviewer wUZa. Asymptotically, LOCO seems to be better, whereas in finite-data experiments the proposed method is consistently better. The authors provide a hypothesis for why finite sample performance could be better, namely, that the MSE error in tau is worse. I would suggest the authors carefully check this hypothesis in experiments and thus provide an explanation grounded in their theoretical analysis.
More importantly, choosing between LOCO and the proposed method is not straightforward, a concern that has not been addressed. Authors should discuss tests for the assumption or procedures to choose between the two methods. More examples of settings where the data might favor LOCO would help.
Given the experimental results on the proposed method, I am positive that the work is significant. The theoretical justification and guidance for using the method could be improved. Hence, I raise my score to 3 weak accept.
Claims And Evidence: - Claims on inferential properties of the VIM are demonstrated convincingly by experiments.
- Claims on benefits of the proposed method are not very rigorous. The comparison of bias between LOCO and PermuCATE in Eqs. (5) and (7) is a bit misleading. Proposition 3.4 hides the fact that PermuCATE also depends on how good the CATE estimate tau is, since it is assumed to be consistent and Lipschitz. Therefore, both methods are susceptible to estimation errors in tau. Eqs. (5) and (7) hide the dependence on sample sizes and assume consistency for tau. Therefore, it is misleading to say they are finite sample analyses. Relatedly, Eq. (7) does not clearly point to a worse dependence on the dimension of covariates. The claim in line 342 is not supported by the theoretical results. Both (5) and (7) will depend on dimension.
Methods And Evaluation Criteria: - The permutation-based variable importance is a natural concept and the method skillfully develops it for CATE.
- Experiment setup including the metrics and baselines are relevant.
Theoretical Claims: I read proof for Proposition 3.2 carefully and skimmed the proofs for Proposition 3.4 and 3.5. For 3.2, the cited result from Reyero Lobo et al. 2025 shows convergence in Wasserstein which is written as an exact equality in the proof. I think the main result is ok, but the statement should be written carefully.
Experimental Designs Or Analyses: - I checked the design for synthetic experiments which control for complexity of conditional outcome function.
- The complexity of conditional covariate density function is not tested adequately. Since Assumption 3.1 on covariate density is important to the results, I think the experiments need to vary complexity of density function as a way to check sensitivity to the assumption, including settings when the assumption is not true.
Supplementary Material: I skimmed the proofs of Propositions 3.2, 3.4, and 3.5.
Relation To Broader Scientific Literature: - The method is new and experimental comparison of finite-sample behavior against LOCO are impressive.
- Please discuss the relation to model-x knock off literature in more detail since the methods similarly model the covariate density. Does there exist CATE importance measure from the literature that are natural baselines?
Essential References Not Discussed: The paper can discuss other methods that are based on the principle of leave one covariate out such as Zhang and Janson 2020 and more papers on VIMs based on model-x knock off if any.
Lu Zhang, Lucas Janson. Floodgate: inference for model-free variable importance. arXiv 2020 https://arxiv.org/abs/2007.01283
Other Strengths And Weaknesses: # Strengths
- Method is conceptually simple and inherits the statistical inference guarantees on type-I error from prior results.
- Proposed method provides a useful alternative to existing variable importance measures by leaning more towards modeling the conditional covariate distribution instead of the conditional outcome distribution. Therefore, the method might be more suitable in some applications.
- Experimental validation is thorough. Authors rigorously estimate the importance measures by using flexible CATE-learners wrapped in ensemble methods like super learner and use cross-fitting to reduce variance. Throughout the results report confidence intervals or p-values over sufficiently many repeated samples of the data.
# Weaknesses
- It is unclear when to use LOCO vs PermuCATE since they have similar statistical inference properties but differ in assumptions. Consider discussing a test for checking Remark 3.3 that states that the conditional covariates are easier to model than conditional outcome function. Was it the case for IHDP dataset? The claim that modeling conditional covariate distributions are easier for some applications should be more carefully demonstrated or discussed through citing literature.
- The presentation can be improved. I felt that CPI could be explained in more detail before the methods to give the context for comparisons to LOCO. Please present the statistical guarantees for CPI. Please introduce total Sobol index and discuss why it is a good measure for CATE.
- Compared to the existing method LOCO, the proposed method is not readily extensible to computing variable importance for variable subsets. The nuisance parameter $\nu$ will be a challenging multivariate regression, whereas LOCO still requires a regression on a univariate response. Computing importance for subsets is important to handle highly-correlated variables and to limit the computation, since often variable subsets can be grouped together and treated as one variable.
- Experiments on IHDP data did not test whether variables are ranked in the true order. AUC only tests for ranking between important and not important variables. A more prevalent use of variable importance measures is to identify which of the important variables are the most important. Please consider reporting metrics like precision at k or Kendall tau.
Other Comments Or Suggestions: Minor comments, no response is requested for the following
- Some terms in the introduction were not introduced. Please describe what is the purpose of variable importance measures, in what way they help in interpretation, feasible risk, and what is the outcome or effect in CATE in the biomedical applications discussed in introduction. Explain how showing variable importance for CATE will help in biomedical applications.
- Please discuss CPI in some more detail in related work or introduction to make the reader appreciate its importance.
- Please explain the remark on total Sobol index in line 135.
- Consider labeling Eq. (3) as R(…)=…
- I really like the Figure 1. It conveys information on effect of sample size on VIM, comparison to true value, and hypothesis test results quite clearly.
- CPI and PermuCATE are used interchangeably, CPI in notation and PermuCATE in text. Please consider using the same name everywhere.
- Please define support in line 370. Does it mean variables used in the true outcome function?
- Please complete the sentences at the end of Remark 3.3. It is evident they support the claim, however, can be written more explicitly.
- Motivation for the method is from high-dimensional settings like SNPs data whereas the IHDP datasets in evaluation study has moderate number of covariates.
Typos
Line 066 an
Line 162 estimated
Line 333 pseudo
Questions For Authors: - Please discuss validity of the claims on variance of the two methods.
- Please intuitively describe total Sobol index.
- Please provide more evidence either from literature or IHDP data that covariate density is easier to model or suggest when to apply the proposed method.
- Please clarify whether the experiment setup checks Assumption 3.1 systematically.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review. The comments and questions are relevant and will help us improve the paper.
# Dependence of PermuCATE on the estimate of $\tau$
- We refer the reviewer to the response provided to Reviewer wUZa, in the section "Dependence of PermuCATE on the estimate of $\tau$ ". We explain why second-order error terms cancel out in the estimation of PermuCATE, and propose to rephrase proposition 3.4 to include first-order error terms.
- We agree with the comment that Eq. 7 does not explicitly point to a worse dependence on the dimension. We plan to rephrase L342 to clarify that this is an empirical observation.
# Complexity of the conditional distribution estimation
- The IHDP benchmark uses real-world data, with non-Gaussian continuous features and imbalanced categorical features (up to 4 categories). Because the ground truth for the conditional distribution is not available, the validity of Assumption 3.1 cannot be assessed. However, Figure 4 shows that PermuCATE outperforms LOCO with better statistical power and AUC for detecting the important features while controlling the type-1 error on this dataset.
- In addition to this benchmark, we propose an additional experiment where we revisit the simulation study presented in Figure 2, also varying the complexity of the conditional distributions. To do so, using simulations inspired by [1], we sample the covariates from a latent variable model: we first sample latent variables from mixtures of Gaussians and then generate observed covariates $X$ through a non-linear transformation with interaction terms of the latent variables. This results in non-Gaussian covariates with complex conditional distributions, illustrated in this figure: https://imgur.com/a/JqT04in Panel a) presents the marginal distributions, b) the kurtosis of the generated variables (blue) compared with the kurtosis of Gaussians (orange), and c) their correlation matrix. Similarly to Figure 2, we generate two CATE functions, one linear and one non-linear, and compare the variance of PermuCATE and LOCO for three different models at varying sample sizes. The results, shown in this figure: https://imgur.com/a/t0qhb61, reveal a lower variance for PermuCATE compared to LOCO, similar to Figure 2. Our interpretation is that the complexity of the covariate distributions also affects LOCO, by making the CATE harder to estimate.
[1] Hollmann et al. Accurate predictions on small data with a tabular foundation model. Nature 2025
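The covariate-generation scheme sketched above (latents drawn from a Gaussian mixture, then pushed through a non-linear map with interaction terms) can be illustrated with a minimal NumPy sketch; all dimensions, mixture parameters, and the specific non-linearities here are illustrative assumptions, not the settings of the actual experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_latent = 500, 3

# Latents: mixture of two Gaussians (illustrative means and equal weights)
comp = rng.integers(0, 2, size=n)
means = np.array([[-2.0] * d_latent, [2.0] * d_latent])
Z = means[comp] + rng.standard_normal((n, d_latent))

# Observed covariates: non-linear transforms plus interaction terms
X = np.column_stack([
    np.tanh(Z[:, 0]),
    Z[:, 1] ** 2,
    Z[:, 0] * Z[:, 1],          # interaction of latents
    np.sin(Z[:, 2]),
    Z[:, 1] * Z[:, 2],
    Z[:, 0] + 0.5 * Z[:, 2] ** 3,
])
X += 0.1 * rng.standard_normal(X.shape)  # small observation noise
```

The resulting columns of `X` are non-Gaussian and correlated through the shared latents, which is the property the rebuttal experiment relies on.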
# Description of the total Sobol index
- We refer the reviewer to the response provided to Reviewer VGy6, in the section "Description of the total Sobol index". We provide an intuitive presentation of the total Sobol index. We will integrate this presentation into the final version of the paper.
# Discussion of model-x knockoff
The proposed approach shares similarities with model-X knockoffs (KO) as both model the conditional distribution of covariates, assuming this task is easier than estimating the quantity of interest (e.g., CATE). This point is discussed in Remark 3.3 of the main paper. A key difference is that model-X KO is designed for variable selection, whereas PermuCATE and LOCO focus on variable importance estimation. The latter provides richer information by measuring importance (total Sobol index) rather than just binary selection. Additionally, KO handles multiple testing, while our method ensures type-I error control. Also, constructing KO variables requires complete exchangeability, a stricter condition than the conditional independence needed for CPI. The conditional randomization test (CRT) from Candès et al. 2018 is more comparable to our approaches for individual conditional independence testing but is much more computationally expensive.
Besides the work from Hines et al. 2020, we are unaware of other methods that provide model agnostic variable importance for CATE.
# Extension to groups of variables
- As mentioned in L235 and demonstrated in Appendix A.12, CPI-based approaches can be extended to groups of variables. CPI with grouping in the context of prediction problems was formally treated in https://doi.org/10.1609/aaai.v38i10.28997. There is no particular complexity when performing multi-output regression. For each variable of the group, the conditional distribution can be estimated independently and in parallel: $\forall j\in G$ estimate $\nu_j = \mathbb{E}[X^j|X^{-G}]$, with $X^{-G}$ the complement of group $G$.
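To make the parallel per-variable scheme concrete, here is a minimal sketch under an assumed linear model, regressing each $X^j$, $j \in G$, on the complementary covariates via ordinary least squares; the toy data and the OLS learner are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
X[:, 1] += 0.8 * X[:, 3]  # give a group variable dependence on the rest

G = [0, 1]                           # group of interest
rest = [j for j in range(d) if j not in G]

def fit_conditional_mean(X, j, rest):
    """OLS estimate of nu_j = E[X^j | X^{-G}], assuming a linear model."""
    A = np.column_stack([np.ones(len(X)), X[:, rest]])
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    return A @ coef                  # fitted conditional means

# Each j in G is estimated independently (and could run in parallel)
nu = {j: fit_conditional_mean(X, j, rest) for j in G}
```

In practice any regressor can replace the OLS step; the point is only that the per-variable fits do not interact.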
# Ranking of important variables
- The ranking of important variables is indeed richer information than the AUC for binary classification of important variables. However, except for simple scenarios (Gaussian covariates and linear CATE) such as those presented in Figure 1, the ground truth importance (or ranking) is not known. For IHDP, two problems prevent the analytical computation of the total Sobol index: the true conditional distribution is unknown, and the CATE function is non-linear. | null | null | null | null | null | null |
WMarkGPT: Watermarked Image Understanding via Multimodal Large Language Models
Accept (poster)
Summary: The paper introduces WMarkGPT, a multimodal large language model (MLLM) designed to understand watermarked images without requiring access to the original images. Specifically, it integrates a visual encoder, learnable queries, a visual abstractor, and an LLM to generate detailed descriptions of watermarks and predict their visibility. This work proposes a three-stage training pipeline that progressively enhances the model's ability to understand object positioning, watermark characteristics, and semantic corruption in watermarked images. The authors construct three visual question-answering (VQA) datasets: an object location-aware dataset, a synthetic watermarking dataset, and a real watermarking dataset. Extensive experiments demonstrate that WMarkGPT significantly outperforms existing MLLMs in terms of watermark description relevance and visibility prediction accuracy.
Claims And Evidence: The claims made in the paper are well-supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem.
Theoretical Claims: The paper does not present any formal theoretical claims or proofs. The focus is on empirical improvements and model design. Therefore, this aspect is not applicable.
Experimental Designs Or Analyses: The experimental designs and analyses appear sound and valid:
- The authors conducted extensive experiments on both synthetic and real watermarking datasets, demonstrating the model's effectiveness.
- The use of multiple evaluation metrics provides a comprehensive assessment of the model's performance.
- The ablation studies provide insights into the impact of different training stages and dataset sizes on model performance.
Supplementary Material: No additional supplementary materials.
Relation To Broader Scientific Literature: The key contributions of the paper are well-grounded in the broader scientific literature:
- The use of multimodal large language models (MLLMs) for image understanding aligns with recent advancements in vision-language models (e.g., Qwen, LLaVA, VILA).
- The focus on watermark understanding addresses a significant gap in existing evaluation methods, which rely on pixel-wise metrics and require access to original images.
- The proposed datasets and training pipeline build upon prior work in VQA and multimodal learning, providing a new benchmark for watermark security.
Essential References Not Discussed: The paper has cited relevant works in multimodal learning, watermarking, and corresponding evaluation metrics.
Other Strengths And Weaknesses: **Strengths**:
- The author carried out thorough ablation experiments to assess the effectiveness of the proposed method.
- The manuscript’s expression and structure enhance its readability, making it easy for readers to understand.
- This paper first proposes the use of MLLM to evaluate the content and visibility of watermarks, which is an interesting and meaningful topic.
**Weaknesses**:
- The authors do not appear to have conducted any ablation experiments to demonstrate the effectiveness of the learnable query setting for this task. I wonder whether similar performance could be achieved through data-efficient multi-stage SFT training alone (like the paradigm of general multi-modal understanding LLMs).
- As the WMarkGPT model is capable of directly predicting the specific location of the watermark, the benchmark seems to lack an evaluation metric for watermark position prediction. It would be helpful to design an evaluation metric, similar to those used in object detection tasks, that includes precision and recall to assess the model's effectiveness in watermark location prediction.
Other Comments Or Suggestions: Please refer to Weaknesses and Questions.
Questions For Authors: - What is the specific network structure of Visual Abstractor? Is it a transformer block or just one layer of attention?
- Have the baseline models compared by the author been fine-tuned on the constructed WQA dataset? If not, I would suggest evaluating the performance of these models after fine-tuning them to provide a more comprehensive comparison.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thanks for the valuable comments and recognition of the novelty and meaningfulness of our research. The reviewer acknowledged the clarity and thoroughness of our experimental design, as well as the strong evidence backing our claims. They highlighted WMarkGPT’s superior performance in watermark description and visibility prediction. The reviewer also thought that our datasets and training pipeline provided a new benchmark for watermark security. In addition, the reviewer commended our comprehensive ablation studies and found the manuscript well-structured and easy to follow.
**Q1: Effective of the learnable query**
**Answer:**
The learnable queries are employed to extract high-level semantic features from images and filter out redundant visual noise. We conducted an experiment in which we remove the learnable queries and applied multi-stage SFT directly to the model, resulting in considerable performance drops. This shows that multi-stage SFT alone is not sufficient, as it lacks the targeted feature abstraction provided by the learnable queries.
| WQA-Synthetic | BLEU-1 | ROUGE-L | LLM-Score | ACC |
| ---------------- | ------ | ------- | --------- | ---- |
| w/o Learnable Query | 0.474 | 0.433 | 82.981 | 0.613 |
| WMarkGPT | 0.488 | 0.446 | 87.751 | 0.645 |
| WQA-Real | BLEU-1 | ROUGE-L | LLM-Score | ACC |
| ---------------- | ------ | ------- | --------- | ---- |
| w/o Learnable Query | 0.401 | 0.394 | 69.788 | 0.516 |
| WMarkGPT | 0.424 | 0.418 | 71.950 | 0.546 |
**Q2: Evaluating watermark position prediction**
**Answer:** We agree that evaluating watermark position prediction is crucial. As suggested, we conducted additional evaluations to verify our performance. We employed the following prompt and GPT to assess the position prediction of our WMarkGPT:
_Task: Determine whether the following two sentences describe approximately the same watermark position, allowing for minor variations in phrasing or slight positional bias._
_Sentence 1: {predicted description}_
_Sentence 2: {ground truth}_
_Judgment Criteria:_
_1. Do they describe the same general location, even if there are small differences?_
_2. Are the differences within an acceptable range (e.g., slight shifts in coordinates or wording variations like "top-left" vs. "upper-left")?_
_3. Is there any ambiguity that affects the interpretation?_
_4. Does the predicted description avoid mentioning any position information, as the watermark is inherently invisible?_
_Expected Output:_
_Just Final Consistency Judgment: (Yes/No)_
The precision, recall, and accuracy of the classification are computed referring to the visibility/invisibility of the watermark:
| Dataset | Precision | Recall | ACC |
| ---------- | ------ | ------- | --------- |
| WQA-Synthetic | 0.851 | 0.830 | 0.823 |
| WQA-Real | 0.792 | 0.763 | 0.734 |
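Under one plausible reading of the setup above — treating each GPT consistency judgment as a binary prediction to compare against a binary ground-truth label — the three metrics can be computed as follows; the toy label lists are hypothetical.

```python
# Hypothetical data: preds = GPT's yes/no consistency judgments,
# labels = ground-truth binary labels for the same images.
preds  = [True, True, False, True, False, True, False, True]
labels = [True, False, False, True, True, True, False, True]

tp = sum(p and l for p, l in zip(preds, labels))        # true positives
fp = sum(p and not l for p, l in zip(preds, labels))    # false positives
fn = sum(not p and l for p, l in zip(preds, labels))    # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)
```

With these toy lists the sketch yields precision 0.8, recall 0.8, and accuracy 0.75; the paper's reported numbers come from the full evaluation, not this illustration.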
**Q3: Structure of Visual Abstractor**
**Answer:** The visual abstractor comprises six transformer layers, each employing cross-attention between extracted visual features and learnable queries. Through cross-attention with learnable queries, the model progressively refines visual feature extraction, ensuring these queries capture high-level semantic information while filtering out irrelevant visual clues. This design also strengthens the subsequent alignment between image features and textual descriptions. We will release the code after acceptance to clarify it.
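As a simplified illustration of this design, a single-head cross-attention step between learnable queries and visual features can be sketched in NumPy; the dimensions and random projection weights are illustrative stand-ins for the six-layer, learned abstractor described above.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_queries, n_patches = 32, 8, 49

queries = rng.standard_normal((n_queries, d))   # learnable queries
visual = rng.standard_normal((n_patches, d))    # encoder patch features

# Illustrative projection weights (learned parameters in practice)
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Cross-attention: each query attends over all visual patch features
attn = softmax((queries @ Wq) @ (visual @ Wk).T / np.sqrt(d))
abstracted = attn @ (visual @ Wv)   # (n_queries, d) compact visual tokens
```

The output is a fixed, small set of query tokens that summarize the patch features, which is the mechanism by which the abstractor filters out redundant visual detail before the LLM.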
**Q4: The experimental comparisons**
**Answer:** We appreciate the reviewer’s concern regarding the experimental setup of baseline models. To clarify, all baseline models were fine-tuned on our proposed datasets rather than directly evaluated in a zero-shot manner.
Summary: The authors innovatively propose a new multi-modal large language model, WMarkGPT, for watermarked image understanding. This paper points out that traditional methods rely on indicators such as PSNR, require the original image, and cannot fully evaluate the influence of the watermark on content. WMarkGPT predicts the visibility of watermarks and generates detailed descriptions without needing the original image. In addition, the authors construct three VQA datasets and design a three-stage training process. The experiments show that WMarkGPT is superior to existing MLLMs in understanding watermarked images on synthetic and real datasets.
## Update after rebuttal
The authors' rebuttal has resolved my concerns, and I will keep my rating. Thanks a lot.
Claims And Evidence: Yes, the proposed methods and evaluation criteria are highly relevant and appropriate for the problem of watermark image understanding. The authors introduce WMarkGPT, a multi-modal large language model designed specifically to predict watermark visibility and generate detailed descriptions without requiring the original image. This approach effectively addresses the limitations of traditional methods like PSNR and SSIM, which rely on the original image and fail to comprehensively evaluate the semantic impact of watermarks. Additionally, the construction of three high-quality VQA datasets, including real watermark datasets, provides valuable resources for evaluating and improving the model's performance.
Methods And Evaluation Criteria: Yes, the claims made in the submission are well-supported by clear and convincing evidence. The authors provide comprehensive experiments demonstrating WMarkGPT's superior performance compared to existing multi-modal large language models on watermark image understanding tasks. The three-stage training process is validated through ablation studies, which clearly show the contribution of each stage to the final performance. The use of both synthetic and real datasets for evaluation ensures that the model's effectiveness is tested across diverse scenarios.
Theoretical Claims: N/A
Experimental Designs Or Analyses: 1. The authors construct three high quality VQA data sets, especially the marking of real watermarking data sets provides valuable resources for field research. The public commitment of the data set will greatly facilitate subsequent research.
2. The three-stage progressive training strategy proposed by the authors is reasonably designed, and the ablation experiment fully verifies the contribution of each stage to the final performance.
Supplementary Material: Yes, Related Works, Templates of Object Location-aware Dataset, and Implementation Details.
Relation To Broader Scientific Literature: WMarkGPT innovatively addresses the limitations of traditional watermark evaluation methods by proposing a novel multi-modal model that predicts watermark visibility and generates descriptions without needing the original image. This contribution, along with the creation of high-quality VQA datasets and a robust training strategy, significantly advances the field of watermark detection and multi-modal understanding
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The paper is comprehensive in content, and the language is clear and standard.
2. This paper points out the limitations of traditional watermark evaluation methods (such as PSNR and SSIM) that rely on the original image and cannot comprehensively evaluate the visibility and semantic impact of watermark. This paper proposes a specific solution, which has clear practical significance and significantly improves the practicability of the method.
Weakness:
1. The real watermark dataset contains only 2.5k samples, and the watermark generation methods are limited to a small number of algorithms (such as HiNet and Safe-SD). Does the small data size limit the model's generalization? The authors need to further explain the scope of its application.
2. It is suggested to add detailed explanations for the three-stage training strategy, such as why the visual encoder and abstractor are optimized in stages, so as to more comprehensively show the model's attention focusing ability in the watermarked area.
Other Comments Or Suggestions: 1. The parameter size, operational efficiency, training and testing time and resource usage of the model need to be further elucidated.
2. Watermark visibility prediction is also one of the important tasks of the model. Is the previous watermark related description beneficial for the final visibility prediction?
3. Please add a discussion section to further analyze the limitations of the method and suggest ways to improve it in the future.
Questions For Authors: 1. Some training details are briefly described in the main text, and the appendix supplements some of them, but the reproducibility still needs to be further refined.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thanks for your valuable comments and recognition of our innovation, comprehensive experiments, superior performance and practical significance. We appreciate that you highlighted WMarkGPT’s ability to predict watermark visibility without the original images, the newly constructed datasets, our progressive training strategy, and the benefits for subsequent research.
**Q1: The WQ-Real dataset and our application**
**Answer:**
We adopt a three-stage training pipeline to progressively enable WMarkGPT to comprehend watermarked images. In the first stage, the model is trained on a large-scale object location-aware QA dataset based on natural images. In the second stage, it is further optimized on a synthetic watermarking QA dataset, bridging the gap from natural to watermarked images. Finally, the model is fine-tuned on a real watermarking QA dataset. Because the training sets in the first two stages are large, only a relatively small and high-quality real dataset is needed to align the model with practical data distribution. The significance of dataset quality over quantity has been well-established in MLLM research. To ensure that the watermarked images closely mirror real scenarios, we conducted a thorough investigation and selected typical image watermarking algorithms—including Hidden, BalujaNet, WengNet, HiNet, and Safe-SD—to build our real dataset. Experimentally, we expanded the WQA-Real dataset step by step and found that performance stabilized at around 2.5K samples.
| WQA-Real size | BLEU-1 | ROUGE-L | LLM-Score | ACC |
| --- | --- | --- | --- | --- |
| 1000 | 0.299 | 0.329 | 57.654 | 0.412 |
| 1500 | 0.367 | 0.379 | 62.035 | 0.437 |
| 2000 | 0.404 | 0.400 | 64.654 | 0.502 |
| 2250 | 0.423 | 0.410 | 71.654 | 0.551 |
| 2500 | 0.424 | 0.418 | 71.950 | 0.546 |
Originally, we designed WMarkGPT to evaluate watermarked images more precisely and comprehensively, particularly when original images are unavailable for generative watermarking. The success can facilitate the development of MLLMs for other tasks involving mixed visual patterns, such as deepfake detection and edited image comprehension. In addition, our dataset collection paradigm and training strategy can also be consulted.
**Q2: Effect of optimizing visual encoder and abstractor**
**Answer:**
In brief, the staged optimization of the vision encoder and visual abstractor enables the model to progressively refine its focus on watermarked regions.
The vision encoder captures low- to mid-level semantics, providing the basis for further abstraction and alignment with the language model. The visual abstractor refines the visual features into high-level representations through a set of learnable queries. By aggregating crucial image features and filtering out noise, it delivers more compact and meaningful visual embeddings for subsequent processing by the language model. We will add more explanations in the final paper.
**Q3: Efficiency of WMarkGPT**
**Answer:** As detailed in Section 3.1 Model Architecture, WMarkGPT is built upon the LLaMA-2-7B framework (7 billion parameters). The complete model, including the vision encoder, visual abstractor and learnable queries, has a total of 8.198 billion parameters. We have made a detailed summary of training and testing costs in the response to Reviewer oNvm Q7.
**Q4: Effect of previous watermark related description**
**Answer:**
Our experiments show that predicting watermark visibility without first extracting a watermark-related description results in lower accuracy on both the WQA-Synthetic and WQA-Real datasets. This indicates that capturing key features—such as spatial structure and texture—beforehand is crucial for accurately estimating watermark visibility.
| ACC | WQA-Synthetic | WQA-Real |
| ------ | ------- | --------|
| w/o Prefix | 0.639 | 0.541 |
| WMarkGPT | 0.645 | 0.546 |
**Q5: Limitations and future works**
**Answer:** The current WMarkGPT model is trained exclusively on image datasets, whereas video watermarking and its more precise evaluations remain crucial yet underexplored aspects of copyright protection. Future research will focus on collecting video watermarking datasets and developing video-based MLLMs to understand watermarked videos, further advancing digital watermarking technology.
**Q6: More training details**
**Answer:** We provide the complete configurations for the three training phases in the table below to ensure full reproducibility. After acceptance, we will release all the training code.
Environment: CUDA 12.1.105, Torch 2.4.1

| Training | Stage 1 | Stage 2 | Stage 3 |
| --- | --- | --- | --- |
| Batch size | 32 | 16 | 16 |
| Learning rate | $1e-4$ | $2e-5$ | $2e-5$ |
| Epochs | 3 | 5 | 5 |
| Optimizer | AdamW | AdamW | AdamW |
| Scheduler | CosineAnnealingLR | CosineAnnealingLR | CosineAnnealingLR |
| Warmup ratio | 0 | 0.03 | 0.03 |
| Precision | bf16 | bf16 | bf16 |

Summary: This paper constructs datasets of watermarked images with different levels of watermark visibility. The paper trains a model specialized for describing watermark patterns and evaluating the watermark visibility level. The authors compare their model with several existing multimodal large language models on their datasets, and it achieves better performance on four metrics. The paper is also clear and well written.
Claims And Evidence: Not really. Please check the weaknesses.
Methods And Evaluation Criteria: Partially. Please check the weaknesses.
Theoretical Claims: NA
Experimental Designs Or Analyses: Partially. Please check the weaknesses.
Supplementary Material: No
Relation To Broader Scientific Literature: Yes
Essential References Not Discussed: No
Other Strengths And Weaknesses: ## Paper Strengths
- Well written and easy to follow.
- The authors make comprehensive comparisons with other models and they achieve better results.
## Paper Weaknesses
- The authors mentioned that when constructing the “real” dataset, they randomly selected images generated by existing models. Essentially, this still relies on AI to create the dataset, meaning it is not truly a real dataset.
- In the “real” dataset constructed by the authors, only invisible watermarks are used. Given this, how do the authors later describe the position, content, and other relevant information of these watermarks?
- The description of WMarkGPT is not clear. Which LLM is used and what is its parameter size?
- Some of the evaluation metrics used by the authors require reference texts; however, the authors do not explicitly explain how these reference texts are obtained.
- The ablation study is not very meaningful, as the maximum dataset size is only 50K images, which is relatively small. It is predictable that increasing the dataset size would improve the model’s performance. Similarly, it is evident that training and fine-tuning the model would enhance its capability.
- The model comparison is unfair. It would be more reasonable to compare WMarkGPT with other models that have been fine-tuned on watermark description and evaluation.
- The authors do not discuss the efficiency of their model. How long it takes to evaluate an image and how long it takes to train the model.
- The authors do not provide the download links of their datasets.
- In fig 5, step 2, there is a misspelling 'obiect', which should be 'object'.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thanks for your insightful feedback and for recognizing our clear presentation, comprehensive comparisons, and better results.
**Q1: The explanation of the "real" dataset**
**Answer:**
Compared to our WQA-Synthetic dataset, which employs pseudo watermarking processes to generate watermarked images, WQA-Real uses genuine watermarking algorithms (e.g., Hidden, BalujaNet) to produce watermarked images that more closely align with evaluating data distributions. Additionally, all QA pairs and watermark visibility assessments in WQA-Real were manually annotated by trained human evaluators (Sec. 2.2 & Appendix E), ensuring that both descriptions and scores reflect genuine human perception rather than synthetic labels. WMarkGPT is proposed to comprehensively understand these watermarked images without accessing original images and further advance the development of watermarking algorithms. We use the term "real" to emphasize the distinctness of WQA-Synthetic and WQA-Real datasets.
**Q2: Invisible watermark description**
**Answer:** Ideally, watermarking algorithms generate watermarked images with fully invisible watermarks. However, in practice, existing methods often produce images with varying degrees of watermark visibility. To advance the research, we propose WMarkGPT for precise and comprehensive evaluations of these watermarked images. Our WQA-Real dataset thus includes both visible and invisible watermarks (see Fig. 4 for distribution). For genuinely invisible ones, we label them simply as “invisible” without adding spatial descriptions (Appendix E).
**Q3: Details of WMarkGPT**
**Answer:** As detailed in Section 3.1 Model Architecture, WMarkGPT is built upon the LLaMA-2-7B framework (7 billion parameters). The complete model, including the vision encoder, visual abstractor and learnable queries, has a total of 8.198 billion parameters. We will make it clear in the final paper.
**Q4: The reference texts used in LLM-Score**
**Answer:** As detailed in Appendix G, our evaluation metric LLM-Score [1][2] uses the reference template: ``Evaluate the relevance between the following description and ground truth on a scale from 0 to 4. Higher scores indicate better relevance. Return only the numeric score. - Description: *candidates* - Ground Truth: *references*."
[1]Lu Y, Yang X, Li X, et al. Llmscore: Unveiling the power of large language models in text-to-image synthesis evaluation. Neurips 2023.
[2]Huang K, Sun K, Xie E, et al. T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation. Neurips 2023.
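For illustration, here is a small sketch of filling this template and parsing the judge's reply; the prompt text follows the template quoted above, while the parsing and clamping logic (and the omission of the actual LLM call) are our assumptions.

```python
import re

TEMPLATE = (
    "Evaluate the relevance between the following description and ground "
    "truth on a scale from 0 to 4. Higher scores indicate better relevance. "
    "Return only the numeric score. - Description: {cand} - Ground Truth: {ref}"
)

def build_prompt(candidate, reference):
    """Fill the LLM-Score template with a candidate description and reference."""
    return TEMPLATE.format(cand=candidate, ref=reference)

def parse_score(reply, lo=0.0, hi=4.0):
    """Extract the first number in the judge's reply and clamp it to [lo, hi]."""
    m = re.search(r"\d+(?:\.\d+)?", reply)
    if m is None:
        raise ValueError(f"no numeric score in reply: {reply!r}")
    return min(max(float(m.group()), lo), hi)

# query_llm(prompt) would call the judge model here; this sketch only
# shows the prompt construction and reply parsing.
```

Clamping guards against out-of-range replies; how the per-sample 0–4 scores are aggregated into the reported LLM-Score values is the paper's choice and not shown here.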
**Q5: The size of the WQ-Synthetic dataset**
**Answer:**
We conducted additional experiments by increasing the WQ-Synthetic dataset size. As shown in the table below, adding more training data only resulted in slight performance improvements. This might be due to the complexity of watermarked image understanding, where the model needs to predict watermark visibility and produce detailed textual descriptions of its location, content, and effect on image semantics. Our 7B-parameter LLM backbone may have reached its limit at this dataset scale. In our final paper, we will include experiments and analysis with a more powerful LLM backbone and larger datasets.
| WQA-Synthetic | BLEU-1 | ROUGE-L | LLM-Score | ACC |
| ---------------- | ------ | ------- | --------- | ---- |
| 50K | 0.488 | 0.446 | 87.751 | 0.645 |
| 60K | 0.488 | 0.448 | 87.723 | 0.638 |
| 70K | 0.490 | 0.453 | 87.758 | 0.649 |
**Q6: Model comparison**
**Answer:** We clarify that all baseline models were fine-tuned on our proposed datasets rather than directly evaluated in a zero-shot manner.
**Q7: Efficiency of WMarkGPT**
**Answer:** The training and inference costs of our model are as follows (2 Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz).
- Training: 12 hours (Training Stage 1) and 6 hours each (Training Stages 2-3) on 8×NVIDIA RTX 6000 Ada GPUs.
- Inference: 3.395 seconds for a single image (avg.) on a single NVIDIA RTX 6000 Ada GPU.
**Q8: Download links of the datasets**
**Answer:** As stated in the abstract, we will publicly release both the code and dataset upon acceptance. To comply with the double-blind review policy, we provide an anonymous download link here: https://drive.google.com/drive/folders/1JoRq91b0UAbTU4SEblCycRVwCpUgkeg2?usp=drive_link.
**Q9: Typo in Fig. 5**
**Answer:** We apologize for the typo and will correct it in the final paper.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your response. The answers from Q3-Q9 are very helpful. However, I still have concerns on the problem setting. Could you please help to motivate me on the practical usage of this type of studies? Thank you.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful follow-up and for acknowledging our previous responses. We appreciate the opportunity of further clarifying the motivation and practical significance of our study.
As we know, research on image watermarking is critical for copyright protection and information steganography. The research can be divided into two main groups, i.e., post-processing-based watermarking and text-guided generative watermarking, where watermarks are directly embedded into the generation procedure and original images are not available. The core motivation of our work stems from the limitations of traditional watermarking evaluation metrics (e.g., PSNR, SSIM, MAE), which play a fundamental role in advancing watermarking algorithms. These numerical metrics rely on original images and fail to precisely reflect human perception, such as watermark visibility (referring to Figure 2 and Figure 3), which is an important indicator of watermarking efficacy. Their limitations are also demonstrated by other research [1, 2]. Besides, they do not provide any assessment of the embedded watermarks or of the disruptions to image contents. More importantly, these metrics cannot be used to evaluate generative watermarking since the original images are not accessible.
To address these limitations, we propose a reference-free, MLLM-inspired evaluation paradigm that better aligns with human perception. Our model WMarkGPT only leverages watermarked images to support question-answering and provide textual descriptions about watermark visibility, location, content, and impact on image semantics, enabling a more nuanced comprehension of watermarked images. These precise and detailed descriptions facilitate the watermarking measurement and the development of advanced algorithms.
In addition, our study has potential implications on developing MLLMs for other tasks involving complex and mixed visual patterns, such as deepfake detection, edited image comprehension, and multimodal content verification. Our dataset collection strategies and progressive training pipeline of aligning vision-language domains for fine-grained perceptual tasks offer general references for the community.
We sincerely hope our explanation helps clarify the motivations and the practical significance of our work. Please let us know if you have any further questions.
[1] Patwari K, Chuah C N, Lyu L, et al. PerceptAnon: exploring the human perception of image anonymization beyond pseudonymization for GDPR. ICML 2024.
[2] Fu S, Tamir N, Sundaram S, et al. Dreamsim: Learning new dimensions of human visual similarity using synthetic data. NeurIPS 2023. | Summary: This paper proposes using an MLLM to detect watermarked images; the model architecture is adapted from mPLUG-Owl2, and three training stages are proposed to progressively fine-tune the model. Experimental results show improved performance.
Claims And Evidence: The paper may overstate the performance gain: in Section 4.1 and Table 1, the paper does not mention whether baselines such as Qwen2-vl are trained on the same datasets as those used for WMarkGPT. If the baseline models are zero-shot while WMarkGPT is fine-tuned, the experimental comparison is unfair.
Methods And Evaluation Criteria: See claims and evidence.
Theoretical Claims: NA
Experimental Designs Or Analyses: See claims and evidence.
Supplementary Material: Yes, all
Relation To Broader Scientific Literature: This paper contributes to extending the MLLMs ability to watermark detection field.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths: the paper is well-structured and easy to follow
Weakness:
1. The experimental designs are unfair in Section 4.1 if the baseline performance is obtained in zero-shot scenario.
2. To thoroughly evaluate the effectiveness of progressive training, it is essential to introduce an additional baseline: a model trained simultaneously on all three datasets—the object position dataset, the synthetic watermark dataset, and the real watermark dataset. This approach will provide a comprehensive comparison, allowing us to assess whether progressive training offers distinct advantages over a more conventional, unified training strategy.
3. The model architecture is unchanged; more task-specific design could be devised to fit the watermark understanding task.
Other Comments Or Suggestions: No
Questions For Authors: No question
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and for recognizing our clear presentation and our contribution of extending MLLMs' ability to watermarked image understanding.
**Q1: Experimental setup of baseline models**
**Answer:**
We clarify that all baseline models were fine-tuned on our proposed datasets rather than directly evaluated in a zero-shot manner.
**Q2: The effectiveness of progressive training strategy**
**Answer:**
We conducted an experiment where the model was trained on all three datasets jointly in a unified manner using the same training configurations as in the progressive setting. The experimental results below indicate that the progressive training approach consistently yields much better performance compared to the unified training strategy. For example, on the WQA-Synthetic dataset, progressive training achieves an improvement of +0.064 in BLEU-1 and +5.367 in LLM-Score relative to unified training. Similar trends are observed on the WQA-Real dataset.
These results suggest that the progressive training strategy effectively mitigates the domain adaptation challenges inherent in joint training, leading to improved generalization across diverse data domains.
| WQA-Synthetic | BLEU-1 | ROUGE-L | LLM-Score | ACC |
| ---------------- | ------ | ------- | --------- | ---- |
| Unified Training | 0.424 | 0.388 | 82.384 | 0.635|
| Progressive Training | 0.488 | 0.446 | 87.751 | 0.645|
| WQA-Real | BLEU-1 | ROUGE-L | LLM-Score | ACC |
| ---------------- | ------ | ------- | --------- | ---- |
| Unified Training | 0.375 | 0.376 | 69.547 | 0.522 |
| Progressive Training | 0.424 | 0.418 | 71.950 | 0.546 |
**Q3: Architectural enhancement for watermarked image understanding**
**Answer:**
We thank the reviewer for this valuable suggestion. In the MLLM research literature [1, 2, 3], the focus has primarily been on rigorous multimodal data collection and innovative training strategies. As the first work on watermarked image understanding using MLLMs, we also place our main emphasis on the novel WQA-Synthetic/WQA-Real datasets and a three-stage training pipeline.
That said, we recognize that specialized architectural enhancements can further improve watermark understanding. To address the challenges arising from mixed image and watermark patterns, we have integrated Mixture-of-Experts (MoE) into the LLM backbone. This addition enables more effective processing of watermark-specific features alongside natural image semantics. Experiments have shown that this modification boosts performance compared to our baseline model, and we will include a detailed analysis in the final paper.
| WQA-Synthetic | BLEU-1 | ROUGE-L | LLM-Score | ACC |
| ---------------- | ------ | ------- | --------- | ---- |
| WMarkGPT | 0.488 | 0.446 | 87.751 | 0.645|
| WMarkGPT-MoE | 0.491 | 0.453 | 87.841 | 0.658 |
[1] Li, Chunyuan, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao, Llava-med: Training a large language-and-vision assistant for biomedicine in one day. Advances in Neural Information Processing Systems, 2023.
[2] Lin, Ji, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2024.
[3] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, andSteven Hoi. InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning. Advances in Neural Information Processing Systems, 2023. | null | null | null | null | null | null |
De-AntiFake: Rethinking the Protective Perturbations Against Voice Cloning Attacks | Accept (poster) | Summary: The paper investigates the effectiveness of adversarial perturbations as a defense against voice cloning (VC) under threat models that specifically considering perturbation purification techniques used by attackers. The study finds that while existing purification methods can reduce the impact of protective perturbations, they still cause distortions in the feature space of VC models, leading to a decline in voice cloning performance. Therefore, the paper proposes a novel two-stage purification method that first purifies the perturbed speech and then refines it using phoneme guidance to better align it with clean speech. The experimental results demonstrate that this new method outperforms existing purification techniques in disrupting voice cloning defenses. Ablation experiments validate the effectiveness of each component. Adaptive protection experiments demonstrate that the proposed method exhibits a degree of robustness.
Claims And Evidence: The main claims of the paper include: (1) existing adversarial perturbations as a defense against voice cloning are vulnerable to existing adversarial purification; (2) the proposed new purification method outperforms existing methods in disrupting voice cloning defenses. Claims (1) and (2) are supported by subjective and objective experiments in Section 5.2, and the ablation study in Section 5.3 verifies the effectiveness of each component of the method in claim (2). In summary, the main claims of the paper are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods make sense for addressing the problem. The evaluation criteria, including the chosen metrics and baselines (defense methods and purification baselines), are in line with established practices and influential prior work in the field, making the evaluation robust and relevant.
Theoretical Claims: This paper is primarily focused on empirical evaluation and algorithmic innovation in the domain of voice cloning defense. It does not present formal theoretical claims or mathematical proofs in the traditional sense. Therefore, there were no proofs to check for correctness, and consequently, there are no issues related to proof validity to discuss.
Experimental Designs Or Analyses: The experimental designs were checked for soundness and validity. Specifically, the designs in Sections 5.2 (main results), 5.3 (ablation study), and 5.4 (adaptive protection experiments) were checked. The designs are appropriate for testing the paper's claims, employing suitable datasets, metrics, and comparisons to relevant baselines. No issues were found regarding the soundness or validity of the experimental designs or analyses.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper relates to the broader literature of using adversarial perturbations for voice cloning defense, as seen in works like [1-3]. The key contribution is in empirically investigating the impact of adversarial purification on these defenses in the voice cloning domain, providing valuable practical insights. While the proposed two-stage purification method combines existing techniques such as [4-6], the proposed method represents a useful improvement. The paper contributes to the field by providing a more nuanced understanding of the interplay between adversarial perturbations and purification in voice cloning, and suggests a direction for enhancing purification methods in this specific context.
[1] Yu, Z., Zhai, S., and Zhang, N. AntiFake: Using Adversarial Audio to Prevent Unauthorized Speech Synthesis. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pp. 460-474, Copenhagen Denmark, November 2023.
[2] Huang, C.-y., Lin, Y. Y., Lee, H.-y., and Lee, L.-s. Defending Your Voice: Adversarial Attack on Voice Conversion. In 2021 IEEE Spoken Language Technology Workshop (SLT), pp. 552-559, Shenzhen, China, January 2021.
[3] Li, J., Ye, D., Tang, L., Chen, C., and Hu, S. Voice Guard: Protecting Voice Privacy with Strong and Imperceptible Adversarial Perturbation in the Time Domain. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, pp. 4812-4820, Macau, SAR China, August 2023b.
[4] Wu, S., Wang, J., Ping, W., Nie, W., and Xiao, C. Defending against adversarial audio via diffusion model. In The Eleventh International Conference on Learning Representations, 2023.
[5] Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.
[6] Tian, Y., Liu, W., and Lee, T. Diffusion-Based Mel-Spectrogram Enhancement for Personalized Speech Synthesis with Found Data. In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 1-7, December 2023.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The paper highlights the vulnerability of existing perturbation-based voice cloning defenses in the presence of adversarial purification. This is a critical security concern for current voice cloning defenses.
2. The proposed new purification method is a creative combination of existing techniques and demonstrates some originality. Experimental results demonstrate its effectiveness.
3. The paper is well-organized, and the figures and tables are clear and informative.
Weaknesses:
1. The proposed method increases time cost due to the introduction of the refinement stage.
2. In Section 4.3, "Phoneme Representation," the terms "force alignment" and "force aligner" may require clarification. This lack of clarity makes it slightly challenging for readers unfamiliar with this content to fully grasp the method's implementation.
3. Some experimental results and settings require explanation (see the "Questions For Authors" section).
Other Comments Or Suggestions: While not strictly necessary for publication, open-sourcing the code would greatly enhance the reproducibility of this work and facilitate further research in this area.
Questions For Authors: 1. The paper analyzes the limitations of existing purification methods, such as DiffWave from [2] and WavePurifier[3]. However, the paper does not analyze why DiffSpec from [2] or DualPure[4] performs worse than DiffWave or WavePurifier[3] in this task (as shown in Table 4). This is somewhat counterintuitive, as in other adversarial purification tasks, their performance difference compared to DiffWave is much smaller [2, 4]. A reasonable analysis of these results could significantly strengthen the paper. Could the authors provide an analysis of these seemingly unexpected results?
2. The paper compares the proposed method with "DS," "QT," and "Mel" methods from [1]. However, it does not include comparisons with the "Filter Power" and "LPC" methods also presented in [1]. Could the authors please either supplement the experiments to include these comparisons or provide a justification for why "Filter Power" and "LPC" methods were not included in the comparative evaluation?
3. In Table 4 (Appendix B.4), the "Parameter Value" for "DS" (Downsampling) is 1600Hz. For original audio with a sampling rate of 16000Hz, this parameter choice seems unusual for downsampling as it would lead to a significant loss of high-frequency components. The parameter also deviates from the default settings in Paper [1]. Could the authors please explain the rationale behind selecting this specific parameter value for downsampling?
[1] Hussain, S., Neekhara, P., Dubnov, S., McAuley, J., and Koushanfar, F. WaveGuard: Understanding and Mitigating Audio Adversarial Examples. USENIX, 2021.
[2] Wu, S., Wang, J., Ping, W., Nie, W., and Xiao, C. Defending against adversarial audio via diffusion model. In The Eleventh International Conference on Learning Representations, 2023.
[3] Guo, H., Wang, G., Chen, B., Wang, Y., Zhang, X., Chen, X., Yan, Q., and Xiao, L. WavePurifier: purifying audio adversarial examples via hierarchical diffusion models. In Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, pp. 1268–1282, 2024.
[4] Tan, H., Liu, X., Zhang, H., Zhang, J., Qian, Y., and Gu, Z. DualPure: An Efficient Adversarial Purification Method for Speech Command Recognition. In Interspeech 2024, pp. 1280–1284. ISCA, September 2024.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to each comment as follows and sincerely hope that our rebuttal could properly address your concerns.
# Weaknesses
>W1. The proposed method increases time cost due to the introduction of the Refinement stage.
We agree that the Refinement stage increases computational time compared to single-stage purification. However, voice cloning attacks are fundamentally offline processes. Attackers typically have **hours/days** to prepare critical audio samples (e.g., a CEO's voice for fraud), making the increased computation, which is **seconds-level** as shown in Table 8, still acceptable for these scenarios. For instance, a 15-second audio sample takes only 13 seconds for our method to process.
>W2. The terms "force alignment" and "force aligner" may require clarification.
- The Force Aligner mentioned in our paper refers to the Montreal Forced Aligner (https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner), which is a **tool** used for aligning text with audio. Specifically, it works by first converting the text into a sequence of phonemes. Then, using a pre-trained speech recognition model, it aligns the start and end times of each phoneme to precise timestamps within the corresponding audio.
- Force alignment, on the other hand, refers to the **process** of performing this audio and text alignment using the Montreal Forced Aligner.
To prevent any confusion, we will provide explicit definitions for both "force aligner" and "force alignment" in Section 4.3 of the revised paper.
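To give intuition for what forced alignment produces, the sketch below shows the kind of phoneme-level timing a forced aligner emits (the word, phoneme labels, and timestamps here are hypothetical; the Montreal Forced Aligner actually outputs a TextGrid file), along with the per-frame lookup a downstream model can perform on it:

```python
# Hypothetical forced-alignment output for the word "voice" (/v OY s/):
# each entry is (phoneme, start_sec, end_sec). The labels and timestamps
# below are made up for illustration; a real aligner emits a TextGrid.
alignment = [
    ("v",  0.00, 0.08),
    ("OY", 0.08, 0.31),
    ("s",  0.31, 0.45),
]

def check_alignment(intervals):
    """Verify intervals are ordered, non-overlapping, and positive-length."""
    for (_, _, e0), (_, s1, _) in zip(intervals, intervals[1:]):
        assert e0 <= s1, "phoneme intervals must not overlap"
    assert all(e > s for _, s, e in intervals), "each phoneme needs a duration"
    return True

def phoneme_at(intervals, t):
    """Look up which phoneme is active at time t (in seconds), if any."""
    for ph, s, e in intervals:
        if s <= t < e:
            return ph
    return None

check_alignment(alignment)
print(phoneme_at(alignment, 0.20))  # falls inside the "OY" interval
```

Such per-timestamp phoneme labels are what allow a refinement model to be conditioned on linguistic content frame by frame.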
# Questions (& Weakness 3)
>Q1. Please analyze why DiffSpec and DualPure perform unexpectedly worse than DiffWave and WavePurifier in your task compared to other purification tasks.
The relatively poorer performance of DiffSpec [2] and DualPure [4] can be attributed to their mel-spectrogram processing parameters. These methods operate on **spectrograms with 32 mel bins**, which were initially designed for classification tasks. While 32 mel bins are sufficient for classification (since the classification models are trained on inputs of the same dimensionality), reconstructing waveforms from such a low-resolution spectrogram can lead to significant distortion.
However, existing voice cloning models [7, 8] typically utilize **mel-spectrograms with 80 mel bins or more** as input, relying on high-fidelity spectral features. In contrast, WavePurifier [3] employs a higher resolution (256x256) spectral representation, which preserves the essential details for voice cloning, thereby performing significantly better than DiffSpec [2] and DualPure [4] in our experiments. DiffWave, operating directly in the waveform domain, avoids this type of distortion, similarly leading to better results.
We will include this analysis in Section 5.2 or the Appendix of the revised paper.
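As a rough numeric illustration of this resolution argument (using the standard HTK mel formula, not the baselines' exact filterbanks), one can compare how wide the mel bands become at the top of the 0–8 kHz range for 32 versus 80 bins:

```python
import numpy as np

def mel(f):                       # standard HTK mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def band_edges(n_mels, fmax=8000.0):
    # n_mels triangular filters need n_mels + 2 equally spaced mel points
    return inv_mel(np.linspace(0.0, mel(fmax), n_mels + 2))

for n in (32, 80):
    widths = np.diff(band_edges(n))
    # With 32 bins, the widest (highest-frequency) band spans several
    # hundred Hz more than with 80 bins, coarsening spectral detail.
    print(f"{n} mel bins: widest band ~ {widths.max():.0f} Hz")
```

The coarser 32-bin bands smear exactly the high-frequency detail that waveform reconstruction for voice cloning depends on.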
>Q2. Omitted comparisons with the "Filter Power" and "LPC" methods.
Filter Power and LPC from [1] were omitted in our initial experiments for the following reasons:
- Filter Power's functionality overlaps with BPF in [5] (both are band-pass filtering methods);
- LPC's functionality overlaps with MP3 in [5] (both are compression methods).
We acknowledge the importance of direct comparison and conducted new experiments. As shown below, **both methods underperform our proposed approach** (results to be added to Table 5):
Method|xSVA|dSVA
-|-|-
Filter Power| 0.222|0.133
LPC|0.121|0.026
Ours|0.711|0.818
>Q3. The "Parameter Value" for Downsampling (1600Hz) needs explanation.
The "1600Hz" entry in Table 4 is a typo. The correct downsampling rate is **8000Hz**, aligning with common practices in prior voice cloning defenses and adversarial purification [2, 5, 6]. For completeness, we also evaluated [1]'s default parameter (6000Hz), achieving:
Method|Downsampling rate|xSVA|dSVA
-|-|-|-
Down-Up Sampling|6000Hz|0.204|0.140
Down-Up Sampling|8000Hz|0.168|0.139
Ours|N/A|0.711|0.818
**Both of them underperform our proposed approach.** We will correct this typo in Table 4.
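For intuition on why down–up sampling is so destructive here: by the Nyquist theorem, an 8 kHz representation cannot carry content above 4 kHz, so higher-frequency speech detail is lost or aliased. A toy numpy sketch (naive decimation without an anti-aliasing filter; illustrative only, not the WaveGuard implementation):

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs                       # 1 second of samples
x = np.sin(2 * np.pi * 7000 * t)             # 7 kHz tone, valid at fs = 16 kHz

# Naive decimation to 8 kHz: the new Nyquist limit is 4 kHz, so the
# 7 kHz tone cannot be represented -- it aliases down to 1 kHz.
x_ds = x[::2]
spec = np.abs(np.fft.rfft(x_ds))
freqs = np.fft.rfftfreq(len(x_ds), d=2 / fs)
print(f"dominant frequency after decimation: {freqs[spec.argmax()]:.0f} Hz")  # 1000 Hz
```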
# Other Comments or Suggestions
>Open-sourcing the code would greatly enhance the reproducibility of this work and facilitate further research in this area.
Yes, we will make our code publicly available once our paper is accepted.
[1] WaveGuard: Understanding and Mitigating Audio Adversarial Examples. USENIX21.
[2] Defending against adversarial audio via diffusion model. ICLR23.
[3] WavePurifier: purifying audio adversarial examples via hierarchical diffusion models. MobiCom24.
[4] DualPure: An Efficient Adversarial Purification Method for Speech Command Recognition. Interspeech24.
[5] Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition. TDSC22.
[6] AntiFake: Using adversarial audio to prevent unauthorized speech synthesis. CCS23.
[7] Yourtts: Towards zero-shot multi-speaker tts and zero-shot voice conversion for everyone. ICML22.
[8] Better speech synthesis through scaling. arXiv preprint 23.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying the comments. I will keep the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for confirming the clarification. We appreciate your valuable feedback and support. | Summary: This paper investigates the vulnerabilities of protective perturbation-based voice clone (VC) defenses, demonstrating that these defenses are susceptible to existing adversarial purification techniques. Additionally, the authors propose an enhanced two-stage adversarial purification method that mitigates embedding inconsistencies caused by current methods, thereby further exposing the weaknesses of these VC defenses. Even with full access to the gradient information of their purification model, the study underscores the urgent need for more advanced techniques to prevent unauthorized data usage in VC. Both objective and subjective evaluations confirm the superior performance of their purification method.
Claims And Evidence: The claims made in this paper are verified by experimental results, which include both objective and subjective evaluations.
Methods And Evaluation Criteria: The issue highlighted in this paper is important. The systematic evaluation of protective perturbation-based VC defenses serves as a warning to the community that such perturbations can be mitigated by adversarial purification methods. The proposed two-stage adversarial purification method further emphasizes these risks. Experimental results support their claim.
Theoretical Claims: The problem addressed in this paper is practical, and there are no theoretical claims made.
Experimental Designs Or Analyses: - The selected voice cloning method, protection methods, and adversarial purification baselines are based on recent work.
- The experimental designs in this paper are comprehensive, including both objective and subjective evaluations.
Supplementary Material: Yes, I reviewed the supplementary material, including the extended ablation study and visual results.
Relation To Broader Scientific Literature: - Voice Clone Attacks
- Adversarial Examples
- Adversarial Purfication Methods
Essential References Not Discussed: To my knowledge, important references are included in this paper.
Other Strengths And Weaknesses: Strength:
1. The problem addressed in this paper is meaningful for society.
2. The motivation behind the two-stage purification method is well articulated.
3. The experiments are comprehensive, and the effectiveness of different components in their method is validated through an ablation study.
4. The threat model is well considered, including adaptive attacks, and the experimental results demonstrate that their purification method is difficult to mitigate even in a white-box scenario.
Weaknesses:
1. The process of purification, specifically the unconditional diffusion step, requires further explanation. For instance, Equations (3) and (4) should explicitly include $X_{adv}$.
2. The training details are insufficiently clear. For example, the statement "The Purification model is a pretrained unconditional DiffWave model" contrasts with the context, which states, "Our method trains two models separately." The authors should clarify the detailed settings.
3. In Figure 11, the authors note that "Existing purification methods tend to produce samples with similar patterns and blurred details," but the similar patterns are difficult to discern. This section also requires further clarification.
Other Comments Or Suggestions: 1. The description of the summarized contributions could be refined, for example, by directly highlighting the specific risks identified.
2. Typos: In the caption of Figure 2, "(b-c), Purified" should be corrected to "(b-c), purified."
Questions For Authors: Refer to Weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to each comment as follows and sincerely hope that our rebuttal could properly address your concerns.
# Weaknesses
>W1. The process of purification, specifically unconditional diffusion, requires further explanation. For instance, Equations (3) and (4) should explicitly include $x_\text{adv}$.
In Equations (3) and (4), $x_t$ refers to the audio sample at timestep $t$, $x_\text{adv}$ refers to the adversarial audio sample, and $t$ ranges from $1$ to $T_\text{pur}$. The forward process starts with $x_{0}$ ($x_\text{adv}$) and generates the noisy sample $x_{T_\text{pur}}$; the reverse process starts with $x_{T_\text{pur}}$ and generates $x_{0}$ ($x_{\text{pur}}$). We will clarify these terms in Section 4.2 as follows:
- (**Revision**) At each timestep $t$, the forward process is expressed as:
$q(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t} x_{t-1}, \beta_t \mathbf{I}), \quad t = 1, 2, ..., T_\text{pur}$
where $\beta_t$ denotes the variance schedule, and $x_0 = x_\text{adv}$.
- The reverse process denoises the waveform $x_{T_\text{pur}}$ in the same $T_\text{pur}$ steps, and at each step, it is formulated as:
$p_{\theta}(x_{t-1} | x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \sigma_t^2 \mathbf{I}),$
where $\mu_\theta(x_t, t)$ is the mean function parameterized by $\theta$, $\sigma_t^2$ is the time-dependent variance schedule, and $x_0 = x_\text{pur}$.
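For intuition, this forward/reverse chain can be sketched in a few lines of numpy (a toy illustration only: the placeholder `denoise_mean` stands in for the trained DiffWave mean predictor $\mu_\theta$ and is not our actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def purify(x_adv, T_pur, betas, denoise_mean):
    """Toy DDPM-style purification loop.

    `denoise_mean` stands in for the trained model's mean predictor
    mu_theta(x_t, t); here it is a placeholder, not a real DiffWave net.
    """
    alpha_bar = np.cumprod(1.0 - betas)
    # Forward: jump straight to x_{T_pur} using the closed form
    # q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
    x = (np.sqrt(alpha_bar[T_pur - 1]) * x_adv
         + np.sqrt(1.0 - alpha_bar[T_pur - 1]) * rng.standard_normal(x_adv.shape))
    # Reverse: for t = T_pur, ..., 1, sample x_{t-1} ~ N(mu_theta, sigma_t^2 I).
    for t in range(T_pur - 1, -1, -1):
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = denoise_mean(x, t) + np.sqrt(betas[t]) * noise
    return x  # plays the role of x_pur

x_adv = rng.standard_normal(16000)                  # 1 s of fake "adversarial" audio
betas = np.linspace(1e-4, 0.02, 50)                 # variance schedule
x_pur = purify(x_adv, T_pur=50, betas=betas,
               denoise_mean=lambda x, t: 0.95 * x)  # placeholder model
print(x_pur.shape)
```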
>W2. The training details are insufficiently clear. For example, the statement "The Purification model is a pretrained unconditional DiffWave model" contrasts with the context, which states, "Our method trains two models separately." The authors should clarify the detailed settings.
Thank you for pointing out this ambiguity. We will clarify the description, avoiding the term "pretrained" to describe a model fine-tuned on a specific dataset as follows:
- (**Revision**) The Purification model is **based on** a pretrained unconditional DiffWave model which is **then fine-tuned** on the LibriSpeech dataset.
>W3. In Figure 11, the authors note that "Existing purification methods tend to produce samples with similar patterns and blurred details," but the similar patterns are difficult to discern. This section also requires further clarification.
We apologize for not clearly explaining "the similar patterns." This was because the pattern is not evident from the small number of examples provided. We will address this by adding more examples in Figure 11 to clarify these patterns.
# Other Comments or Suggestions
>C1. The description of the summarized contributions could be refined, for example, by directly highlighting the specific risks identified.
We will refine the descriptions of our contributions in the introduction by directly highlighting the specific risks identified. Details are as follows:
- (**Revision**) We assess six VC methods and three protective techniques, revealing **the risk that existing defenses potentially fail to prevent voice cloning attacks**.
>C2. Typos: In the caption of Figure 2, "(b-c), Purified" should be corrected to "(b-c), purified."
Thank you for pointing out this typo. We will correct it in our revised paper. | Summary: The paper evaluates limitations of existing purification methods in countering adversarial perturbations designed to block unauthorized voice cloning (VC), revealing they cause feature distortions that degrade VC performance. A novel two-stage purification method is proposed, combining perturbation removal with phoneme-guided refinement to align purified speech with clean speech distribution. Experiments demonstrate this approach outperforms state-of-the-art methods in disrupting VC defenses, highlighting vulnerabilities in current adversarial perturbation-based security strategies.
Claims And Evidence: The paper's claims are appropriate and well-supported. The authors claim that they are the first to explore vulnerabilities of protective VC (voice cloning) defenses, which is accurate and justified, as demonstrated through comprehensive experiments validating the proposed method's superiority. The substantial experimental evidence effectively reinforces the validity of their claims.
Methods And Evaluation Criteria: The method presented in this paper demonstrates important innovation, proposing a novel two-stage purification-refinement architecture. Additionally, the use of evaluation metrics such as xSVA, dSVA, and Mean Opinion Score (MOS) is well-justified and contextually appropriate for assessing the framework's effectiveness.
Theoretical Claims: The paper elaborates on the purification process in detail via Equations (3)-(5).
Experimental Designs Or Analyses: The experimental and analyses of this paper are sound and valid. It compares with many adversarial purification baselines and analyzes the results on many tables and visualization figures.
Supplementary Material: The supplementary materials provide detailed code implementation, which aligns with the implementation details described in the paper.
Relation To Broader Scientific Literature: None
Essential References Not Discussed: None
Other Strengths And Weaknesses: **Strength**
1. The writing of this paper is smooth and well-organized. The task scenario is clearly defined through the threat model and Figure 1, while the method is introduced in a general-to-specific manner, providing a logical and easy-to-follow structure. Additionally, the related works are presented in a clear and concise way.
2. The experiments are extensive and comprehensive, involving a wide range of voice cloning and protection methods. The comparison methods include most SOTA purification methods. Additionally, detailed ablation studies are provided to thoroughly analyze the impact of different components and parameters, further enhancing the robustness and credibility of the research.
**Weakness**
1. The purification method employed is relatively outdated, specifically an unconditional DiffWave model. Could the proposed method be generalized to other audio diffusion models?
2. Although the time complexity presented in Table 8 is acceptable, it is suggested that the authors consider adopting some diffusion model acceleration sampling techniques or flow-matching models to further improve the speed of purification. Additionally, I am curious about the respective time costs of the purification stage and the refinement stage. Please list them.
3. In Section 4.2, the sampling process of the diffusion model is not clearly articulated. It is recommended that the authors replace Equation 5 with an iterative sampling formula (such as DDPM or DDIM).
Other Comments Or Suggestions: Please refer to the weakness.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to each comment as follows and sincerely hope that our rebuttal could properly address your concerns.
# Weaknesses
>W1. The purification method employed is relatively outdated, specifically an unconditional DiffWave model. Could the proposed method be generalized to other audio diffusion models?
Recent audio diffusion models are often designed for specific tasks such as conditional generation or speech enhancement, and therefore, the majority of them are conditional models. We experimented with applying newer conditional models [1, 2] to our current task, and the table below shows the dSVA after applying only the Purification stage with different models:
| Purification Model | dSVA |
| --- | --- |
| Uncond. DiffWave | 0.647 |
| SGMSE [1] | 0.638 |
| DMSE [2] | 0.598 |
We find that their performance was generally not as good as unconditional DiffWave. We infer that **an unconditional model is more suitable as our Purification model**, possibly because it lacks conditional constraints and might be more robust to unseen noise.
Despite this, we attempted to apply another audio diffusion model to our task, specifically the one from [2], to investigate the potential of our method to generalize to other diffusion models. The resulting dSVA values are presented in the following table. The results indicate that when using another audio diffusion model as the Purification model, our Refinement stage also improved performance. This suggests that **our method has the potential to generalize to other audio diffusion models.**
| Purification Model | dSVA (Purification Stage Only) | dSVA (Full Model) |
| --- | --- | --- |
| DMSE [2] | 0.598 | 0.622 |
>W2. It is suggested that the authors consider adopting some diffusion model acceleration sampling techniques or flow-matching models to further improve the speed of purification. Additionally, I am curious about the respective time costs of the purification stage and the refinement stage. Please list them.
- We list the processing time per second of audio for the Purification and Refinement stages separately in the table below. The results indicate that the Refinement stage takes up the majority of the processing time.
| | Purification Stage | Refinement Stage (N=15) | Refinement Stage (N=30) |
|----|----|----|----|
| Time Cost (s) | 0.152 | 0.715 | 1.253 |
- Your suggestion regarding acceleration is very constructive. We agree that acceleration sampling or flow-matching models could be helpful for real-world efficiency. Due to time constraints, we will explore potential accelerated sampling techniques or more efficient models in our future work to enhance the practicality of our method for real-world applications and advance the field.
>W3. In Section 4.2, the sampling process of the diffusion model is not clearly articulated. It is recommended that the authors replace Equation 5 with an iterative sampling formula (such as DDPM or DDIM).
We will clarify the sampling process in Section 4.2, including replacing Equation 5 with the following iterative sampling formula:
$x_{t-1} \sim p_{\theta}(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1}; \mu_{\theta}(x_t, t), \sigma_t^2 I\big), \quad t = T_\text{pur}, T_\text{pur}-1, \ldots, 1$
where $x_0 = x_\text{pur}$.
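For readers who prefer code, the iterative sampler above can be sketched in a few lines of Python (a toy, self-contained sketch with a placeholder denoiser `mu_theta` and noise schedule; it is not the paper's actual DiffWave implementation):

```python
import random

def ddpm_sample(mu_theta, x_T, sigmas, T_pur):
    """Iterative DDPM reverse sampling: x_{t-1} ~ N(mu_theta(x_t, t), sigma_t^2 I).

    mu_theta: callable predicting the posterior mean from (x_t, t)
    x_T: starting (noised) sample, a list of floats
    sigmas: per-step noise scales, indexed by t
    T_pur: number of reverse steps (the purification depth)
    """
    x_t = list(x_T)
    for t in range(T_pur, 0, -1):
        mean = mu_theta(x_t, t)
        # Sample each coordinate from N(mean_i, sigma_t^2); no noise at the final step.
        noise_scale = sigmas[t] if t > 1 else 0.0
        x_t = [m + noise_scale * random.gauss(0.0, 1.0) for m in mean]
    return x_t  # x_0 = x_pur, the purified sample

# Toy usage: a "denoiser" that shrinks the sample toward zero each step.
random.seed(0)
x_pur = ddpm_sample(
    mu_theta=lambda x, t: [0.9 * v for v in x],
    x_T=[1.0, -2.0, 0.5],
    sigmas=[0.0] + [0.01 * t for t in range(1, 11)],
    T_pur=10,
)
print(len(x_pur))  # same shape as the input
```

The key structural point is that sampling is a loop from $t = T_\text{pur}$ down to $t = 1$, with fresh Gaussian noise injected at every step except the last.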
[1] Richter J, et al. Speech enhancement and dereverberation with diffusion-based generative models. IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 2023.
[2] Tian Y, Liu W, Lee T. Diffusion-Based Mel-Spectrogram Enhancement for Personalized Speech Synthesis with Found Data. IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2023. | Summary: Some works claim that an individual can protect their audio samples from voice cloning via perturbations that induce odd behavior from the voice cloning model (generative), or from voice classification models (discriminative). Another set of works show that these defensive perturbations can be removed at least for classification models, effectively removing the defensive capabilities. But the authors contend that these methods also remove natural features and thereby induce a distribution shift from the original audio. This is apparently fine for discriminative models that don't need all the features, but not for generative models which do. So the voice cloning no longer works. The authors thus propose an additional phenome-based refining step, that essentially maps from the shifted distribution back to the original distribution.
Claims And Evidence: - One should be careful about using the "human" notation of H. This is very hard to define. My human is different from yours. You eventually try some proxy evaluation using the embeddings to show reduced distortion using your method. But this doesn't necessarily imply reduced distortion to a human. A somewhat silly example would be me finding a point via PGD in the embedding space that lies within some ball of the clean audio, but which has no meaning when mapped back to the input space. I'm not sure how exactly you can change this. Simply adding some author evaluation would be fine I think.
Methods And Evaluation Criteria: - Quite honestly, Figure 2 does not easily communicate that there is reduced inter-class separability after applying diffusion-based perturbation removal. The points are too small, and there are too many speakers, i.e., colors. Can you also show the embedding results after applying only step 1 of your method? That should align with (b) and (c), and would communicate that you're doing +1 step on top of them.
Theoretical Claims: NA
Experimental Designs Or Analyses: - I am not convinced entirely by your adaptive protection discussion. I can preface this by saying I am not asking for more experiments, or that you are even mandated to explore these routes given the scope limits enforced by the page limits of an ML venue paper. But, you should discuss the adaptive attack works that have been effective against adversarial purification like DiffAttack (https://proceedings.neurips.cc/paper_files/paper/2023/file/ea0b28cbbd0cbc45ec4ac38e92da9cb2-Paper-Conference.pdf). Sure, it is the audio domain. But what would a defender need to do to run something like DiffAttack here, and make their perturbations survive? It seems rather intuitive to me that BPDA+EoT wouldn't work that well here since the papers you cite evaluated them and were also "robust" to them. Again, the onus is on the defenders here to make a better defense, but improved discussion would improve your paper.
Supplementary Material: NA
Relation To Broader Scientific Literature: The paper falls in line with attacks against perturbation-based training disruption defenses in other domains like face recognition and style transfer.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: - Strengths:
- The community is moving towards a consensus that protective perturbations do not work for the image domain, à la Glaze. It is nice to show this for audio.
- First to evaluate for cloning, which honestly is the more realistic threat, at least when compared to discriminative applications.
- Weaknesses/questions:
- I think there is some assumption here (that likely holds true since your experiments work) that the two audio distributions induced by applying diffusion-based-perturbation-removal (your first step) to clean and defensively-perturbed audio are nearly the same. So, that would be why you can train your refinement model that maps back to the original distribution using some pairs from - (clean sample, diffusion-based-perturbation-removal(clean sample)). And then you can apply this at inference time to some diffusion-based-perturbation-removal(defensively-perturbed point), and expect to obtain a clean sample.
- It's not very intuitive to me that the diffusion-based-perturbation-removal always outputs the same or nearly the same audio distribution regardless of whether the input distribution is clean or adversarial. That in and of itself is an observation, so I think you should mention this at least somewhere. If it were not true, you would need to run these attacks to generate training data for refinement, and that seems more likely to fail to generalize to unseen attacks.
Other Comments Or Suggestions: - Can you please explain in the text how voice-cloning models work? At least the inference? Figure 1 is not explicit enough. I understand you use audio embeddings from the victim, and I am guessing you take text embeddings from the adversary? Or is that also audio embeddings? I suppose it's not critical to the paper but it's rather unfortunate that I went through the entirety of this paper and still don't have the remotest idea how voice cloning works.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to each comment as follows and sincerely hope that our rebuttal could properly address your concerns.
# Claims And Evidence
>Concern about subjective 'human' notation (H). Embedding proxy may not reflect human perception. Suggest adding author evaluation.
- We follow [1] in using similar notation $H$. We will make this clearer in the paper: $H(\cdot)$ represents the *perceived speaker identity* as judged by human listeners for a given audio sample.
- To clarify, (clean, protected, purified) speech is the *reference speech* (VC model input). *Synthesized speech* is the VC model output, potentially used to deceive humans/SV models. We used the VC model's embedding as a proxy evaluation for 'purified speech' as it's the VC model's input; our goal is reduced distortion for the cloning model in 'purified speech' to facilitate successful voice cloning (deceiving SV models/humans). Humans don't evaluate 'purified speech'. However, synthesized speech (VC model output from source speech) may deceive humans.
- Therefore, we have a subjective test (Sec. 5.2) with 20 humans evaluating *synthesized speech* from (clean, protected, purified) inputs, aligning with our threat model's $H$. For the purified speech, we don't involve human evaluation in the threat model (Sec. 3), so we think it is enough to use VC model embedding as proxy evaluation.
# Methods And Evaluation Criteria
>Fig. 2 unclear on reduced inter-class separability. Show embeddings after step 1 and full 2-steps for comparison.
- We add some arrows pointing from clean to purified samples within Fig. 2(b-c) of anonymous link https://github.com/de-antifake/1/blob/main/2.png to better convey the reduced inter-class separability.
- Fig. 2(c) already shows the embedding results after applying only step 1 (our method uses DiffWave for Purification stage, which is consistent with AudioPure), while Fig. 2(d) shows the results of applying the complete 2-step method.
# Experimental Designs Or Analyses
>Discuss adaptive attack against purification like DiffAttack in the audio domain.
- We actively experimented with DiffAttack's core mechanisms (deviated-reconstruction loss and segment-wise forwarding-backwarding) following public code on an RTX A6000 (48GB VRAM). However, audio data's high resolution (e.g., 96k data points for 6s audio at 16kHz vs. 1k for DiffAttack's CIFAR-10) caused out-of-memory issues for DiffAttack on our task.
- Possible solutions include using a lower-resolution diffusion model as surrogate, or processing long audio in chunks to reduce memory overhead. Furthermore, using accelerated sampling methods as surrogate to reduce computational graph depth might also help. Overall, running something like DiffAttack in the audio domain is challenging for defenders due to the high cost of calculating diffusion model gradients. Due to time, we didn't do further experiments, but future work can explore these adaptive defense methods. We will add this discussion to Sec. 5.4 or App. D in the revised paper.
# Weaknesses/Questions
>W1. & W2. There is an assumption: diffusion-based removal leads to similar distributions for clean & protected audio in the paper, which allows training refinement on (clean, purified(clean)) and applying to purified(protected). This non-intuitive observation should be mentioned in the paper.
We show embedding distributions of Clean vs Protected, and Purified(Clean) vs Purified(Protected) Speech in anonymous link https://github.com/de-antifake/1/blob/main/3.png. We observed that diffusion-based perturbation removal brings clean and protected audio to similar distributions, which is a key assumption for our method's effectiveness. Thus, we don't need to generate adversarial data for training. We will mention this observation in the revised paper to clarify the Refinement stage's motivation.
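To make the training setup implied by this observation concrete, the Refinement dataset construction can be sketched as follows (a schematic sketch; `purify` stands in for the diffusion-based Purification stage, and integers stand in for audio clips):

```python
def build_refinement_dataset(clean_samples, purify):
    """Train the Refinement model on clean data only: inputs are
    purified(clean), targets are the original clean samples. At inference
    the same model is applied to purified(protected) audio, relying on the
    observation that Purification maps clean and protected audio to
    similar distributions, so no adversarial training data is needed."""
    return [(purify(x), x) for x in clean_samples]

# Toy usage: "purification" adds a fixed shift to each stand-in clip.
pairs = build_refinement_dataset([10, 20], purify=lambda x: x + 1)
print(pairs)  # [(11, 10), (21, 20)]
```

The design choice this highlights is that the (input, target) pairs never involve protected audio; the distribution-matching assumption is what lets the refiner generalize to purified(protected) inputs at test time.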
# Other Comments or Suggestions
>Explain voice cloning model inference in the text.
We apologize for not providing a clear explanation of how VC models work. Our work considers two main types of voice cloning: Text-to-Speech (TTS) and voice conversion. Both leverage speaker embeddings extracted from the victim's audio to capture their voice characteristics.
- For TTS-based voice cloning, the attacker provides arbitrary text. The TTS acoustic model, conditioned on the victim's speaker embeddings, takes this text and generates corresponding acoustic features (e.g., mel-spectrograms). These acoustic features are then used by the vocoder to synthesize the cloned voice.
- For voice conversion, the model typically takes the acoustic features of an arbitrary speaker's utterance provided by attacker (representing the content) and transforms them using the victim's speaker embeddings to generate speech with the victim's voice.
We will provide a detailed explanation of these mechanisms in the revised paper.
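As a schematic, the two inference pipelines described above can be summarized in pseudocode (all function and argument names here are illustrative placeholders, not a real API):

```python
def clone_tts(victim_audio, attacker_text, speaker_encoder, acoustic_model, vocoder):
    """TTS-based cloning: attacker text -> acoustic features conditioned on
    the victim's speaker embedding -> waveform."""
    spk_emb = speaker_encoder(victim_audio)       # victim's voice characteristics
    mel = acoustic_model(attacker_text, spk_emb)  # e.g., a mel-spectrogram
    return vocoder(mel)                           # synthesized cloned speech

def clone_vc(victim_audio, source_utterance, speaker_encoder, converter, vocoder):
    """Voice conversion: content comes from an arbitrary source utterance,
    identity comes from the victim's speaker embedding."""
    spk_emb = speaker_encoder(victim_audio)
    mel = converter(source_utterance, spk_emb)
    return vocoder(mel)

# Toy usage with string-based stand-ins for each component.
out = clone_tts(
    victim_audio="victim.wav",
    attacker_text="hello",
    speaker_encoder=lambda a: f"emb({a})",
    acoustic_model=lambda t, e: f"mel({t}|{e})",
    vocoder=lambda m: f"wav({m})",
)
print(out)  # wav(mel(hello|emb(victim.wav)))
```

In both pipelines, the speaker embedding is the only channel through which the victim's identity enters, which is why protective perturbations target it and why purifying the reference audio restores cloning.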
[1] Antifake: Using adversarial audio to prevent unauthorized speech synthesis. CCS23. | Summary: The paper "De-AntiFake" systematically evaluates current voice cloning defense mechanisms that use protective perturbations, revealing their vulnerability to adversarial purification techniques. To demonstrate this vulnerability, the authors propose a novel two-stage purification method called PhonePuRe that combines unconditional diffusion with phoneme-guided refinement to effectively bypass these protections. Through extensive experiments across six voice cloning methods and three protection techniques, they show their method significantly outperforms existing purification approaches, achieving higher speaker verification accuracy and better perceptual similarity even against adaptive protections.
Claims And Evidence: The claims are generally well-supported with comprehensive experimental evidence. The authors thoroughly evaluate three protection methods (AntiFake, AttackVC, VoiceGuard) against six VC models, showing convincingly that their method achieves higher speaker verification accuracy.
Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the problem. The speaker verification accuracy (SVA) metrics (xSVA and dSVA) are standard and appropriate measures for evaluating the effectiveness of voice cloning and protection bypassing. The objective MOS score and human evaluation of perceived speaker similarity provide complementary evidence for perceptual assessment.
Theoretical Claims: The author should consider developing theory for why their proposed is able to perform effective VC against various protection. E.g. certain transformation/diffusion computation can mitigate the bounded adversarial noise. It would be helpful to know to what extent (protective setup) we can view this 2-stage procedure as effective. The condition could be sth like the signal is not degraded too much from the transformation
Experimental Designs Or Analyses: The experimental design is sound:
The evaluation dataset is appropriate and of sufficient size. The comparison with five existing adversarial purification methods is comprehensive. And the discussion of adaptive defense protection is very helpful.
Supplementary Material: All supplementary are sound.
Relation To Broader Scientific Literature: I think this work points a good direction for the community where people focused on the protective perturbations.
Not aware.
Essential References Not Discussed: Not aware.
Other Strengths And Weaknesses: - Ablation study like the discussion of adaptive protection is very well-conducted.
Weaknesses:
- The paper could benefit from more discussion about the practical limitations of their attack (e.g., computational requirements for real-world deployment)
- The author should consider include some theoretical discussion which can give a better understanding on the limitations of this methods.
- The paper could explore more deeply whether there are fundamental limitations to protective perturbation approaches given the effectiveness of their purification method
- The author should define dSVA and xSVA with explicit math formulation.
Other Comments Or Suggestions: - Explore Trade-offs Between Attack Success and Audio Quality: Consider a more detailed analysis of the trade-offs between successful bypassing of protections and the resulting audio quality.
Questions For Authors: - How does your PhonePuRe method perform when dealing with speech in different languages outside of the training data? Since phoneme representations are language-dependent, would your refinement stage maintain its effectiveness for languages with significantly different phonetic structures?
- The paper demonstrates the vulnerability of current protective perturbation methods, but what do you believe are the most promising directions for creating more robust voice cloning defenses that could withstand purification attacks like yours?
- Your time cost analysis in Appendix C.5 shows that PhonePuRe requires more computational resources than some existing methods. Have you explored potential optimizations that could reduce this overhead while maintaining similar performance?
- Your experiments focus on zero-shot voice cloning models. Would your conclusions and the effectiveness of your method change when considering few-shot or many-shot voice cloning attacks where attackers have access to more reference audio?
- The paper mentions that your method performs better for male speakers than female speakers. Could you elaborate on the underlying technical reasons for this gender disparity and potential approaches to address it?
- Given that your method significantly reduces the effectiveness of current protective perturbations, do you think there is a fundamental limitation to adversarial perturbation-based defenses against voice cloning, or is this an arms race that will continue to evolve?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to each comment as follows and sincerely hope that our rebuttal could properly address your concerns.
# Weaknesses
>W1. More discussion about practical limitations.
Our implementation utilized an RTX A6000 GPU and Xeon Gold 6130 @2.1GHz CPU. The peak computational resource usage is detailed below:
| | GPU RAM | CPU RAM | GPU Usage | CPU Usage |
| --- | --- | --- | --- | --- |
| Train | 25.7G | 6.0G | 100% | 433% |
| Inference | 9.2G | 5.2G | 100% | 124% |
These requirements indicate that executing our attack is **not feasible on lightweight devices, and requires moderate computational resources** as illustrated in our threat model (Sec. 3).
>W2. **(& Theoretical Claims)** Consider developing theory for better understanding of proposed method's interpretability and limitations.
Thank you for your insightful suggestion. According to the proof in [1], as the forward diffusion time $t$ increases, the KL divergence between the clean distribution $p_t$ and the adversarial distribution $q_t$ is monotonically decreasing, meaning the clean and the adversarial distribution become closer. This provides theoretical support for our Purification stage to effectively mitigate adversarial noise. Building upon this, developing a systematic theory specific to our 2-stage method will lead to a better understanding of our method's limitations and interpretability. However, due to time/space limitations, we can only explore this issue in future work.
>W3. **(& Question 6)** Do you think there is a fundamental limitation to protective perturbations?
Though our method exposes limitations of existing VC defenses, we did not achieve 100% success in bypassing the protections, indicating that current defenses still offer some level of protection. Furthermore, as is common in all areas of information security research, defenders will likely develop countermeasures against our purification. Therefore, we believe **it is premature to assert a fundamental limitation of existing protections, and the arms race between attack and defense will likely continue.**
>W4. Define dSVA and xSVA with math formulation.
We will add explicit definitions for the SVA metrics, for example: $\mathrm{dSVA} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{SV}_d(x_\text{test}^i, x_\text{clean}^i)$, where $\mathrm{SV}_d$ denotes the speaker verification model's binary accept/reject decision.
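For illustration, this average could be computed as follows (a minimal sketch; `sv_decision` is a hypothetical stand-in for the speaker verification model's binary decision $SV_d$):

```python
def dsva(test_samples, clean_samples, sv_decision):
    """dSVA = (1/N) * sum_i SV_d(x_test^i, x_clean^i): the fraction of
    test/clean pairs the speaker verification model accepts as the
    same speaker."""
    assert len(test_samples) == len(clean_samples)
    n = len(test_samples)
    return sum(
        sv_decision(x_test, x_clean)
        for x_test, x_clean in zip(test_samples, clean_samples)
    ) / n

# Toy usage with a threshold on a dummy similarity score.
def toy_sv_decision(a, b, threshold=0.5):
    similarity = 1.0 - abs(a - b)  # stand-in for cosine similarity of embeddings
    return 1 if similarity >= threshold else 0

score = dsva([0.9, 0.2, 0.8], [1.0, 0.9, 0.75], toy_sv_decision)
print(score)  # 2 of 3 pairs accepted -> ~0.667
```

xSVA would be computed the same way with a cross-model verification decision in place of `sv_decision`.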
# Questions
>Q1. Cross-lingual performance of the Refinement stage.
We show the results of a small-batch inference on Russian LibriSpeech (non-Latin script) below. Our Refinement stage **remains effective** outside the training data (English).
| | Protected | w/o Refinement | Full Model |
| --- | --- | --- | --- |
| dSVA | 0.21 | 0.93 | 0.98 |
>Q2. Future directions for robust VC defenses.
We think embedding protective perturbations in higher-level semantic features for robustness appears promising. Combining perturbations with other defenses like proactive watermarking for a multi-layered defense is another promising direction.
>Q3. Optimizations for computational overhead.
Our paper focuses on revealing existing defense vulnerabilities, and we believe the current time cost (second-level) is acceptable for offline attacks. Your question is constructive, due to time, we will strive to optimize in future work to enhance real-world efficiency.
>Q4. Effectiveness for few/many-shot VC attacks.
Although existing VC defenses [2-4] don't consider few/many-shot scenarios, we ran a small-batch VC attack in which the attacker has 5 reference audio samples, and found that our method **remains effective**:
| | Protected | AudioPure | Ours |
| --- | --- | --- | --- |
| dSVA | 0.03 | 0.17 | **0.82** |
>Q5. Reasons and potential solutions for gender disparity.
The figure at the anonymous link https://github.com/de-antifake/1/blob/main/1.png shows that protective perturbations mainly affect higher frequencies, where female voices carry more energy. We infer that **this frequency overlap makes it harder to separate perturbations from female voice characteristics** during purification, explaining the disparity. Potential solutions include hierarchical purification: using different numbers of purification steps for the high- and low-frequency bands could mitigate this.
# Other Comments Or Suggestions
> Explore trade-offs between attack success and audio quality.
We **do not observe a clear trade-off** between audio quality and attack success. We use PESQ to evaluate the audio quality of the purified audio using a small batch of data:
| Purification Steps | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| PESQ | 1.56 | 1.81 | **1.88** | 1.87 | 1.81 |
| dSVA | 0.87 | 0.94 | **0.96** | 0.92 | 0.90 |
As the number of purification steps increases, both PESQ and dSVA first improve and then decline, reaching their optimal values at the same step. This suggests that protective noise degrades both audio quality and attack success, and that a proper number of purification steps improves both.
[1] Diffusion Models for Adversarial Purification. ICML22.
[2] AntiFake: Using Adversarial Audio to Prevent Unauthorized Speech Synthesis. CCS23.
[3] Defending Your Voice: Adversarial Attack on Voice Conversion. SLT21.
[4] Voice Guard: Protecting Voice Privacy with Strong and Imperceptible Adversarial Perturbation in the Time Domain. IJCAI23. | null | null | null | null |
Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction | Accept (oral) | Summary: The paper analyzes the effectiveness of multi-token prediction in open-ended algorithmic tasks that require combinatorial and exploratory creativity. It argues that transformers trained with next-token prediction struggle in these tasks, whereas transformers using multi-token prediction or diffusion models perform better.
Claims And Evidence: The claim is well supported by experiments conducted across three types of architectures: the pre-trained Gemma, the SEDD diffusion model, and GPT-2, and four types of tasks.
Methods And Evaluation Criteria: While this paper does not introduce any new methods, I believe that designing new benchmark tasks is a valuable contribution. The authors also provide diversity and memorization scores, along with performance metrics, which I consider to be appropriate evaluation criteria. However, it would be better to define the memorization score in the main text rather than only in the caption of Figure 3.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The proposed tasks are well designed, and drawing an analogy with the wordplay “What kind of shoes do spies wear? Sneakers.” helps me better understand their purpose. The authors effectively present the experimental results using appropriate plots and clear visualizations.
Supplementary Material: I reviewed Appendix B to gain a better understanding of the multi-token prediction objective. I also attempted to review Appendix D to find an example of the data, but it was not provided. It would be better to include an example for each task.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Overall, the paper is well written and well organized. I did not find any major drawbacks.
Other Comments Or Suggestions: N/A
Questions For Authors: I am very interested in the hash-conditioning experiment and have a few questions:
1. What exactly does a null prefix refer to? Does it mean that there is no additional tokens between the input and output?
2. From my understanding, the role of the random hash prefix is to retrieve a random sibling-parent triplet (or triangle, circle, etc.) from the input graph. Does this interpretation make sense?
3. If possible, could you run experiments where the transformer predicts the output while attending only to the hash prefix? This experiment would help clarify why the hash prefix enhances performance in transformer models while having minimal impact on diffusion models.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your time and effort in evaluating the paper! We are excited that you appreciate the analogy between wordplay and our tasks! We are also happy that you found (a) the tasks well-designed, (b) hash-conditioning interesting and (c) did not find any major drawbacks with the work!
We respond to your questions below.
> the role of the random hash prefix is to retrieve a random sibling-parent triplet (or triangle, circle, etc.) from the input graph. Does this interpretation make sense?
Exactly! Intuitively, we view it as a random seed that guides the model in its random search over the graph to identify the siblings/triangles.
For what it’s worth, more abstractly, we believe hash-conditioning may help via two mechanisms:
- **More representation:** In softmax sampling, if the model has to maximize diversity, the model must maintain multiple diverse thoughts in its representations so that one of them can be sampled at output. Hash-conditioning allows the model to fixate on just “one thought for a given hash”. This way the model can carefully represent one thought while being able to produce diverse thoughts for different hashes.
- **Better planning:** Fixing the randomness ahead of time could allow the model to plan and co-ordinate multiple random decisions in tandem.
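As a minimal sketch of what hash-conditioning looks like at the input level (the token names and hash length here are illustrative assumptions, not the paper's exact format):

```python
import random

def hash_conditioned_prompt(input_tokens, hash_len=8, vocab_size=100, rng=None):
    """Prepend a random 'hash' prefix between the input and the output slot.
    The model then maps each hash deterministically to one creative output,
    so diversity comes from the hash rather than from output-side sampling."""
    rng = rng or random.Random()
    hash_prefix = [rng.randrange(vocab_size) for _ in range(hash_len)]
    return input_tokens + ["<hash>"] + hash_prefix + ["<out>"]

# Same input, different hash prefixes -> the model can produce diverse
# outputs even under greedy (temperature-0) decoding.
rng = random.Random(42)
p1 = hash_conditioned_prompt(["graph", "tokens"], rng=rng)
p2 = hash_conditioned_prompt(["graph", "tokens"], rng=rng)
print(p1 != p2)
```

Sampling many such prompts and decoding each greedily yields a set of outputs whose diversity is controlled entirely by the injected hashes.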
> While this paper does not introduce any new methods,
We wish to gently highlight that hash-conditioning is a novel method we put forth even if we only explore it on our algorithmic test bed. We believe this opens significantly new algorithmic possibilities for creativity in generative models e.g., could one pretrain language models with such hash strings (or by randomly inserting “hash tokens”) to encourage diversity?
> What exactly does a null prefix refer to? … no additional tokens between the input and output?
You’re right! A null prefix indicates there are no additional tokens between the input and output sequences. Sorry about the lack of clarity -- we will fix it!
> If possible, could you run experiments where the transformer predicts the output while attending only to the hash prefix? This experiment would help clarify why the hash prefix enhances performance in transformer models while having minimal impact on diffusion models.
If we understand correctly, we believe that one of our experimental setups already does this: specifically, teacherless training with hash-conditioning. Here, the model attends only to the hash prefix and some dummy tokens. We are curious if you have thoughts about the implications of this!
> I also attempted to review Appendix D to find an example of the data, but it was not provided. It would be better to include an example for each task
Thanks for the feedback! We will improve the clarity of the task data with more illustrated examples, but for now, we note that we provide one example for each task in Figure 1&2, and **the full description of dataset generation** in Appendix D.
---
Rebuttal Comment 1.1:
Comment: Yes, your hash-conditioning method is indeed novel. Regarding the additional experiment, I had assumed that even in teacherless training with hash-conditioning, the output tokens attend to both the input graph and the hash prefix. I was just curious whether the transformer would still perform well if the output tokens attended only to the hash prefix. Please let me know if I am wrong. Regarding the last question, I was simply interested in seeing how the actual input and output are represented in text for each task.
Overall, the paper is well-organized, and the experiments are conducted appropriately. Since my questions are well addressed, I increase my score from 3 to 4.
Update: Thank you for the illustration!
---
Reply to Comment 1.1.1:
Comment: We appreciate your updated assessment of our paper!
Regarding the dataset examples: we provided a graphical illustration of our datasets at this anonymous link: https://gist.github.com/lm-creativity-25/997631eae2237c09cbf9dfe3d41e9e00 The exact details (e.g., tokenization) are provided in the appendix. Also, for Sibling and Triangle, the graph is in-weights rather than in-context – mirroring how pretraining data is derived from the underlying real-world knowledge. | Summary: This paper focuses on addressing the issue of insufficient creativity in traditional next-token prediction (NTP) when applied to open-ended algorithmic tasks. The authors have designed a series of minimalistic algorithmic tasks (such as Sibling Discovery, Triangle Discovery, Circle Construction, and Line Construction) to simulate real-world scenarios that require creative thinking. To this end, the paper proposes a multi-token prediction approach, primarily realized through Teacherless Training and Discrete Diffusion Models, and introduces a Hash-Conditioning strategy to enhance the diversity of generation. Experimental results demonstrate that, compared to traditional NTP methods, the multi-token approach exhibits significant advantages in boosting creativity and reducing the model's tendency to memorize and reiterate.
Claims And Evidence: Yes
Methods And Evaluation Criteria: make sense
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Codes in the zip file and Appendix
Relation To Broader Scientific Literature: Key contributions of the paper related to the topic of Multi-token prediction of LLM and Next-token prediction.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength:
(1) The investigation into the issues surrounding NTP holds substantial theoretical significance and practical application prospects. (2) Controlled experimental design: By devising simplified algorithmic tasks, the study has clearly quantified two metrics—creativity and memorized repetition—thereby facilitating the demonstration of the method's advantages.
Weakness:
(1) The mechanism underlying the role of hash-conditioning has not been clearly elucidated. (2) Although the minimalistic tasks in the paper facilitate quantification and analysis, there exists a gap between these tasks and real-world applications, casting doubt on their generalizability. (3) The performance of large-scale NTP models (only Gemma-2b and GPT2-86M were used in the study) in this context, as well as the effectiveness of reasoning enhancement methods on these tasks, has not been evaluated.
Other Comments Or Suggestions: Suggestions:
(1) Expand the scope of experiments to other practical tasks to assess the generalizability of the proposed method. (2) Incorporate additional evaluation metrics to measure creativity, and conduct human studies to evaluate the effectiveness. (3) Provide a more detailed demonstration of how the global perception capability of MTP (Multi-Token Prediction) contributes to the task at hand, supported by use case studies.
Questions For Authors: See **Suggestions**
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your time & insightful feedback! We address your points below:
### Major points
> The mechanism underlying the role of hash-conditioning has not been clearly elucidated
Here's our current intuition which we'll add to the paper.
Recall that hash-conditioning explicitly injects randomness in the input; in softmax sampling, randomness is elicited from model output. **Hash-conditioning may then help via two mechanisms:**
- **More representational power:** In softmax sampling, if the model has to maximize diversity, the model must maintain multiple thoughts in its representations so that one of them can be sampled at output. Hash-conditioning allows the model to fixate on just “one thought for a given hash”. This way the model can carefully represent one thought while being able to produce diverse thoughts for different hashes.
- **Better planning:** Fixing randomness ahead of time allows the model to plan and co-ordinate multiple random decisions in tandem.
We emphasize that hash-conditioning is a novel approach proposed as part of our broader exploration of creativity (which Reviewers `Lrgc` and `16p5` remark is very interesting). **Thus, we invite you to view hash-conditioning as a novel algorithmic contribution (in a larger paper) with reasonable intuition behind it,** opening up interesting future directions (such as verifying the above mechanisms & generalizing it).
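To make the mechanism concrete, here is a minimal illustrative sketch of hash-conditioning (the token names, hash length, and `random`-based sampling are our illustrative assumptions, not the paper's implementation):

```python
import random

# Hypothetical reserve of special "hash tokens" (illustrative only).
HASH_VOCAB = [f"<h{i}>" for i in range(256)]
HASH_LEN = 4

def add_hash_prefix(tokens, rng):
    """Prepend a random hash prefix; the model can then fixate on
    'one thought per hash' instead of spreading mass across thoughts."""
    prefix = [rng.choice(HASH_VOCAB) for _ in range(HASH_LEN)]
    return prefix + tokens

rng = random.Random(0)

# Training: each training sequence is conditioned on its own fixed hash.
train_seq = ["a", "b", "c"]
conditioned = add_hash_prefix(train_seq, rng)

# Inference: sample a *novel* hash prefix and decode from it, so that
# randomness is injected in the input rather than via softmax sampling.
novel_prefix = [rng.choice(HASH_VOCAB) for _ in range(HASH_LEN)]
```

Different novel prefixes then elicit different, individually coherent generations from the same model.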
> Although the minimalistic tasks in the paper facilitate quantification and analysis, there exists a gap between these tasks and real-world applications, casting doubt on their generalizability
It is absolutely right to acknowledge the gap between our very simple tasks and ambitious real-world tasks.
As the first step, we provided summarization as a practical example demonstrating real-world applicability.
However, the challenge with generalizing real-world tasks is that there is no pre-existing benchmark where creativity, originality and diversity are even quantifiable e.g., it’s **impossible** to judge originality when the dataset is the whole of the internet. This would require a non-trivial effort that is worth a few other papers’ work!
Regardless, we’re happy that you (and also the other reviewers) acknowledge the value in studying our algorithmic tasks: they help us cleanly measure creativity/diversity/originality and analyze how different training methods can affect creativity.
> The performance of large-scale NTP models (only Gemma-2b and GPT2-86M were used in the study) in this context,..., has not been evaluated.
We agree that an ideal study would cover a range of larger models.
- But it’s worth noting that, **for our very minimal dataset sizes/complexities, the 2B scale is very, very large!** (Indeed, Reviewer Lrgc notes that these models are “reasonable”).
- Additionally, we have observed limited absolute performance gain with increasing model size.
- **We have new experimental results that reproduce the benefit of diffusion models over NTP at the 400M parameter scale (the largest open-sourced one we can find)**. We will add detailed scaling plots for each task in the final version of the paper.
> as well as the effectiveness of reasoning enhancement methods on these tasks
This is a completely valid question that we are curious about too. However, there is doubt as to whether “reasoning enhancement” methods would translate to “creativity enhancement”. This is because we not only care about “correctness/coherence of planning/reasoning”, but the “coherence **+ originality + diversity** of a plan”. We kindly refer you to **our response to Reviewer Lrgc (first quoted question)** where we provide three arguments substantiating our point. These are profound questions that require multiple papers of future research.
---
### Other points
> Incorporate additional evaluation metrics... human studies...; demonstration of how the global perception capability of MTP (Multi-Token Prediction) contributes to the task... use case studies.
Thank you for these wonderful suggestions!
Evaluation of creativity in the wild is hard and requires careful consideration worth the effort of a whole new paper.
The current creativity metric we chose, Self-BLEU, is widely used as a diversity metric in language generation benchmarks [1]. It quantifies diversity by measuring BLEU scores across outputs generated by the same model, thus showing the distinctiveness among outputs. We are also exploring other diversity metrics (such as distinct n-gram).
Human evaluations are beyond our current scope & expertise but remain a key future direction for the community.
We will certainly keep these in mind for future follow-ups!
[1] Texygen: A Benchmarking Platform for Text Generation Models
---
**We sincerely hope we've addressed your key concerns (model size & other reasoning methods) in a way that allows you to re-evaluate the paper in a more positive light, thank you!** | Summary: In this work, the authors aim to study the failure of the next-token prediction (NTP) objective at open-ended creative tasks, where the goal is to generate a diversity of outputs satisfying some constraint. Motivated by the fact that many such tasks require learning to form a latent plan that is not captured by learning the distribution one token at a time, they hypothesize that multi-token prediction (MTP) objectives would be more suited to open-ended learning.
Motivated by work in the creativity cognitive science literature, the authors design a set of 4 algorithmic tasks whose distribution they aim to model. They define the creativity score as the number of unique non-memorized outputs that satisfy the task-dependent constraint.
The authors consider 2 MTP objectives from the literature for evaluation against NTP on these tasks: teacherless training and discrete diffusion. They consider 2 sets of models: Gemma 2b v1 models for NTP vs teacherless training, and an additional ~90m parameter setup for NTP vs Diffusion (and teacherless).
The authors find that across tasks and models, NTP leads to low creativity and high memorization, whereas MTP objectives consistently exhibit higher creativity score. The authors additionally find that a novel training method (hash-conditioning), wherein a model is trained with the hash of the sequence it is supposed to model as a prefix and at inference conditioned on a novel hash, consistently leads to higher creativity, even in the NTP setting.
Claims And Evidence: All claims in the paper are supported by adequate evidence.
Methods And Evaluation Criteria: The paper uses a set of simple algorithmic tasks to test for the failure cases of the NTP objective. While I am usually quite wary of toy tasks, here it's actually quite useful to have a simplified setting. This allows the authors to measure creativity where, in natural-language scenarios, this would be challenging (exact uniqueness would not work because of superficial differences, string distance would not capture conceptual differences, and metrics based on embedding distances are less interpretable and sensitive to the choice of representation).
I mostly agree with the authors that this task is a good failure test of NTP prediction: models failing on these tasks at a small scale can reasonably be expected to fail at larger scale. A counterargument might be that large-scale models exhibit more capabilities than small-scale ones, but the experiments at 2b scale with pretrained models somewhat mitigate this concern. A second counterargument might be the existence of chain-of-thought capabilities in large models: using such chains of thought the model might be able to capture the latent plan needed to produce creative outputs. This is shortly discussed in the appendix of the paper and warrants its own followup investigation, nevertheless I think it is outside the scope of the current paper.
Theoretical Claims: No theoretical proofs in this paper.
Experimental Designs Or Analyses: The models used were reasonable, and all experiments with the 2b model were performed 4 times with variation plotted in the charts. The appendix contains detailed hyperparameter studies.
Supplementary Material: I mostly reviewed the Limitations, Remarks and Discussion and Additional Related Work section, while looking shortly at the supplementary experiments.
Relation To Broader Scientific Literature: The relation to the broader literature is excellent, with the main paper detailing the closest works with algorithmic tasks, and the supplementary related work section delving in studies of LLM creativity in more realistic settings (notably creative writing) as well as multi-token objective in contemporary large-scale training (such as the work of Gloeckle et al 24 or the training of DeepSeek V3). The relationship between the teacherless objective and these large-scale efforts might be worth discussing in the main text.
The reference of the classical work of Boden as an inspiration for the tasks is most welcome.
Essential References Not Discussed: None found.
Other Strengths And Weaknesses: * This paper is exceptionally well-written, and the text addresses most questions that come up while reading;
* This paper is very timely, given the growth in interest in MTP methods recently;
* The tasks are quite elegant models of creative tasks and the relationship to combinatorial and exploratory creativity is well-drawn;
* The larger-scale tasks might require more explanation and exposition (why Self-Bleu as an evaluation metric?) but unfortunately space is lacking. This could make for a good follow-up study.
* The result on hash-conditioning is very interesting and should be validated with larger-scale empirical work; how usable is it in practice?
* Related work is extensive and presents an overview of the creativity literature, the MTP literature, the literature on strengths and weaknesses of NTP, and the theoretical literature on creativity in models.
Other Comments Or Suggestions: * Sec 3.1: typo “from an prompt-free autoregressive” -> “ from a prompt-free autoregressive”
* Sec 5: typo “We defer discussion of theoretical studis of diversity”
Questions For Authors: Question 1: Fig 5 Temperature 2.0 might be too much for Gemma, did you try lower temperatures (1.0)?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your detailed and encouraging feedback. We are happy that you find our approach elegant and timely given how it is near-impossible to rigorously measure creativity in the real-world. We are also pleased that you find (a) our proposed hash-conditioning approach interesting (b) our claims well-supported, the model sizes reasonable and (c) our related work discussion extensive.
### Reg. CoT, RL etc.,
> A second counterargument: using chains of thought… to capture the latent plan needed to produce creative outputs
We _completely_ agree with your counterargument (and glad you acknowledge this to be out of scope of this paper!).
Nevertheless, there are some profound doubts that arise when considering CoT in creative planning tasks which differ from standard reasoning tasks:
1. **Current paradigms like CoT, prompting, RL, instruct-tuning -- which are evaluated on math/coding tasks -- are purely designed for one goal: *coherence/correctness of a single, generated plan.*** None are explicitly designed for (i) diversity across various generations and (ii) originality of a generation compared against the training set. Importantly, it is highly unclear how to redesign these paradigms (the rewards, the prompts etc.,) to optimize originality or diversity!
- Arguably, the “SFT on iid samples from a distribution” setup — despite being less fancy — seems to be the most natural way to teach a model to produce diverse & original outputs.
2. **Human-generated data rarely contains the explicit, step-by-step trace behind creative thought.** For example, authors rarely document their internal thought processes while conceptualizing creative works like a research paper or a clever olympiad problem. **Given that the creative process is highly complex compared to standard CoT use-cases,** (see below!) to what extent can a model produce such traces with originality & diversity?
3. Even if one may hope such CoT emerges, **fundamentally, it is unclear if CoT traces even exist for many creative tasks.**
- For instance, in the triangle discovery task, is there a CoT-style approach for maximizing creativity? Would that involve laboriously enumerating all triplets in the graph as CoT and filtering them out? Is such an approach even scalable in real-world tasks? What would the CoT trace for generating all possible original, and diverse completions for “a horse walked into the bar” look like?
- In contrast, we speculate that in many real-world tasks, creative _leaps_-of-thought come from "quick, internal, latent heuristic leaps" that do _not_ occur in token space. (Our non-CoT setup seems better suited for modeling this.)
Evidently, these are profound questions that future research must tackle.
---
> The larger-scale tasks might require more explanation and exposition (why Self-Bleu as an evaluation metric?) but unfortunately space is lacking.
We will improve this discussion! We choose Self-BLEU as it’s widely used as a diversity metric in language generation [1]. It measures BLEU scores across outputs generated by the same model to measure distinctiveness among outputs.
A full fledged future investigation would certainly strengthen our findings.
[1] Texygen: A Benchmarking Platform for Text Generation Models
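For concreteness, here is a simplified Self-BLEU sketch (clipped n-gram precision only, with no brevity penalty or smoothing, so it is a toy stand-in rather than the exact Texygen implementation):

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(hyp, refs, n):
    """Clipped n-gram precision, as in BLEU."""
    hyp_counts = Counter(ngrams(hyp, n))
    if not hyp_counts:
        return 0.0
    max_ref = Counter()
    for ref in refs:
        for g, c in Counter(ngrams(ref, n)).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
    return clipped / sum(hyp_counts.values())

def self_bleu(outputs, max_n=2):
    """Score each output against all *other* outputs of the same model.
    Higher Self-BLEU => outputs are more similar => less diverse."""
    scores = []
    for i, hyp in enumerate(outputs):
        refs = outputs[:i] + outputs[i + 1:]
        precs = [modified_precision(hyp, refs, n) for n in range(1, max_n + 1)]
        scores.append(sum(precs) / len(precs))
    return sum(scores) / len(scores)

identical = [["the", "cat", "sat"]] * 3
diverse = [["the", "cat", "sat"], ["a", "dog", "ran"], ["birds", "fly", "high"]]
```

On these toy outputs, the identical set scores 1.0 and the fully distinct set scores 0.0, matching the intended use as a (inverse) diversity measure.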
> The result on hash-conditioning is very interesting and should be validated with larger-scale empirical work; how usable is it in practice?
Absolutely — validating hash-conditioning at scale would have great impact! **Currently, the roadblock is large-scale evaluation: we need _open-ended_ benchmarks where diversity & originality can be precisely measured alongside correctness/coherence.** While there are evaluations of creativity like in [2] [3] [4] [5], unfortunately, these either use debatable proxies (e.g., n-grams, sentence embeddings) or only measure the quality of a standalone generation without grounding to the training data - none of them are accurately evaluating originality and creativity.
As for the algorithm itself, there are obvious ways to extend hash-conditioning to practice e.g., allocate a reserve of “hash-tokens”, and pretrain/finetune with such hash-tokens inserted in the model. We hope that our paper can inspire an active exploration of such ideas & also settle the benchmark challenge above.
[2] Hu et al., The Belief State Transformer
[3] Lu et al., Quantifying linguistic creativity of language models via systematic attribution of machine text against web text
[4] Peeperkorn et al., Is temperature the creativity parameter of large language models?
[5] Si et al., Can LLMs generate novel research ideas? A large-scale human study with 100+ NLP researchers
> Fig 5 Temperature 2.0 might be too much for Gemma, did you try lower temperatures (1.0)?
Good point; we in fact started with lower temperatures, which performed equivalently if not worse in terms of creativity, which is why we moved to exploring larger temperatures. We will make sure to add this.
Overcoming Non-monotonicity in Transducer-based Streaming Generation | Accept (poster) | Summary: * The paper introduces MonoAttn-Transducer, which enhances the handling of non-monotonic alignments in streaming generation by incorporating monotonic attention.
* The approach leverages the forward-backward algorithm to infer posterior alignment probabilities, enabling efficient training without enumerating exponentially large alignment spaces.
* Extensive experiments demonstrate that the proposed method significantly improves generation quality while maintaining comparable latency to baseline Transducer models.
Claims And Evidence: * The effectiveness of MonoAttn-Transducer in handling non-monotonic alignments is demonstrated through experiments on speech-to-text and speech-to-speech simultaneous translation tasks.
* The paper provides detailed experimental results, including BLEU scores, COMET scores, and latency metrics (Average Lagging), showing consistent improvements over baseline Transducer models across various chunk size settings.
* Ablation studies and comparisons with prior alignment methods highlight the importance of learning monotonic attention through posterior alignment.
Methods And Evaluation Criteria: * The integration of monotonic attention into the Transducer architecture addresses the specific challenge of input-synchronous decoding limitations.
* The use of the forward-backward algorithm for posterior alignment inference maintains computational efficiency while enabling effective training.
* Evaluation on standard benchmark datasets (MuST-C, CVSS-C) and standard metrics (BLEU, COMET, Average Lagging) ensures that the results are comparable and relevant to the streaming generation research community.
Theoretical Claims: The paper does not include formal proofs for theoretical claims. However, the theoretical foundations of the proposed method are based on established concepts:
* The use of the forward-backward algorithm for computing posterior probabilities is a well-known technique in hidden Markov models and sequence transduction.
* The formulation of monotonic attention is consistent with previous work in neural machine translation and speech recognition.
Experimental Designs Or Analyses: * The experiments include comprehensive comparisons with baseline models and state-of-the-art approaches.
* Results across different latency conditions (chunk sizes) provide insight into the flexibility and robustness of MonoAttn-Transducer.
* The evaluation on multiple datasets (MuST-C, CVSS-C) and different language pairs (En→De, En→Es) demonstrates the generalizability of the proposed approach.
Supplementary Material: This paper has no supplementary material.
Relation To Broader Scientific Literature: * The paper builds upon previous work in Transducer models and monotonic attention, addressing their limitations in handling non-monotonic alignments.
* It contributes to the ongoing effort to improve the efficiency and effectiveness of streaming generation models, which is crucial for real-time applications such as simultaneous translation.
* The proposed method complements other approaches, such as attention-based encoder-decoder models and non-autoregressive models, offering a robust solution for complex streaming generation tasks.
Essential References Not Discussed: Since I am not familiar with the relevant literature, I cannot be sure.
Other Strengths And Weaknesses: The paper's main strength is its innovative approach to overcoming non-monotonicity in Transducer-based streaming generation by tightly integrating the predictor states with the source history through monotonic attention. This is a significant advancement as it improves the model's ability to handle complex tasks requiring non-monotonic alignments without a notable increase in latency. Furthermore, the method maintains the same time and space complexity as the baseline Transducer model, making it efficient and scalable.
However, the paper could be improved by providing more detailed theoretical analysis of why the proposed method works well, especially in comparison to other methods. Additionally, while the experimental results are convincing, more comprehensive ablation studies could strengthen the paper's claims.
Other Comments Or Suggestions: * In some places, the text is a bit dense and could benefit from additional figures or diagrams to illustrate complex concepts.
* The discussion of related work could be more detailed, especially regarding how the proposed method differs from and improves upon existing approaches.
Questions For Authors: * How does the performance of MonoAttn-Transducer scale with larger datasets and more complex language pairs? (If the authors can provide additional experimental results or analysis on this, it would strengthen the paper's claims about the method's significance and applicability.)
* Could the proposed method be applied to other types of streaming generation tasks beyond speech translation, such as real-time text summarization or dialogue generation?
* What are the limitations of the current implementation, and what are the plans for future work to address these limitations?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Thank you for your thoughtful review! We will make every effort to respond to your concerns.**
>***1. However, the paper could be improved by providing more detailed theoretical analysis of why the proposed method works well, especially in comparison to other methods. Additionally, while the experimental results are convincing, more comprehensive ablation studies could strengthen the paper's claims.***
Thank you for your suggestion. In the rebuttal period, we have designed an additional analysis to validate the rationale of our approach.
The core technical aspect of this paper is dynamically computing the expected contextualized representation of monotonic attention through inference of the posterior alignment during training. This posterior alignment requires a prior alignment for calculation. **Naturally, there are two potential questions here: 1) Is posterior alignment absolutely necessary, or can the expected contextualized representation be computed directly using a prior alignment? 2) How does the choice of prior alignment affect the result of the posterior alignment, and is it robust?**
For the first question, we attempted to directly calculate the expected contextualized representation using the prior probability, without inferring the posterior alignment. In Table 2, we present the results obtained by using a diagonal prior for this direct calculation. We found that this approach leads to a significant performance drop, particularly when the chunk size is small, where finer alignment is required. This addresses the first question. **Here, we will focus on the second question: the robustness of the posterior alignment.** We examine the impact of different prior choices: diagonal prior and uniform prior.
| | **Chunk Size ($ms$)** | 320 | 640 | 960 | 1280 |
|---------------------------|-----------------------|-------|-------|-------|-------|
| **$p^{\mathrm{dia}}$** | **BLEU** | 24.72 | 26.74 | 27.05 | 27.41 |
| | **AL ($ms$)**| 997 | 1239 | 1606 | 1991 |
| **$p^{\mathrm{uni}}$** | **BLEU** | 24.89 | 26.68 | 27.26 | 27.11 |
| | **AL ($ms$)**| 993 | 1249 | 1601 | 1983 |
As shown, MonoAttn-Transducer's performance demonstrates robustness to the choice of prior alignment. **We visualize the posterior alignment when using different priors at this [anonymous URL](https://github.com/RandomUsername192/AnnoymousMaterials/blob/main/Comparison_cut.pdf). We have observed that, even with significant differences in the prior distribution, the posterior remains fairly robust.** This nice property reinforces the robustness of using the inferred posterior to train monotonic attention.
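For intuition, here is a generic forward-backward sketch on a simplified monotonic alignment lattice. The `score[u][t]` input is a hypothetical prior-times-likelihood weight for aligning output step `u` to source position `t`; the paper's actual Transducer lattice, normalization, and Numba implementation differ.

```python
def posterior_alignment(score):
    """Posterior P(step u aligned to position t | whole sequence) under
    the monotonicity constraint that step u+1 aligns at or after step u."""
    U, T = len(score), len(score[0])
    alpha = [[0.0] * T for _ in range(U)]
    beta = [[0.0] * T for _ in range(U)]
    # Forward: alpha[u][t] = score[u][t] * sum_{t' <= t} alpha[u-1][t'].
    for t in range(T):
        alpha[0][t] = score[0][t]
    for u in range(1, U):
        prefix = 0.0
        for t in range(T):
            prefix += alpha[u - 1][t]
            alpha[u][t] = score[u][t] * prefix
    # Backward: beta[u][t] = sum_{t' >= t} score[u+1][t'] * beta[u+1][t'].
    for t in range(T):
        beta[U - 1][t] = 1.0
    for u in range(U - 2, -1, -1):
        suffix = 0.0
        for t in range(T - 1, -1, -1):
            suffix += score[u + 1][t] * beta[u + 1][t]
            beta[u][t] = suffix
    # Posterior: normalize alpha * beta per output step.
    post = []
    for u in range(U):
        w = [alpha[u][t] * beta[u][t] for t in range(T)]
        z = sum(w)
        post.append([x / z for x in w])
    return post
```

With a uniform 2x2 `score` the posterior tilts step 0 toward early positions and step 1 toward late ones (2/3 vs 1/3), since monotonic paths constrain where each step can sit; the expected context for monotonic attention is then a posterior-weighted mixture of encoder states.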
>***2.Could the proposed method be applied to other types of streaming generation tasks beyond speech translation, such as real-time text summarization or dialogue generation?***
Yes, the proposed method is a general framework that can be applied to many streaming generation tasks, as long as the target sequence consists of discrete tokens. We note a recent trend in research indicating that audio, images, and even video data can be tokenized into discrete sequences with minimal information loss [1,2]. Therefore, we believe this research has broad applicability to various real-time streaming tasks.
[1] SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models
[2] Finite Scalar Quantization: VQ-VAE Made Simple
>***3.How does the performance of MonoAttn-Transducer scale with larger datasets and more complex language pairs?***
Thank you for your suggestion. We are currently working on scaling our approach with a larger dataset (GigaST) to further evaluate its performance. However, due to time constraints during the rebuttal period, the experiment has not been completed. We plan to include the results in the revised version.
>***4.What are the limitations of the current implementation, and what are the plans for future work to address these limitations?***
Currently, we have implemented the forward-backward algorithm and posterior inference using ***Numba***. However, we have observed that GPU utilization during training is occasionally only around 60%. We suspect that some inefficient operators in our implementation may be contributing to this. We believe that optimizing the implementation would help scale this approach further.
## Update after rebuttal
While I appreciate the contributions in this work, I share Reviewers ntmp and zJy7's concerns over the clarity of the methodology description. My question and the authors' response made me realize that important pieces of technical detail are missing. I would still like to keep my score, but would not argue for an accept for this reason.
Claims And Evidence: Yes, the claims are reasonably supported by experimental evidence.
Methods And Evaluation Criteria: Yes. The evaluation focuses on average latency, and standard machine translation metrics (BLEU, COMET), which seems befitting to the problem of speech to speech/text translation.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, the experimental design is reasonable. There is a small concern that this technique uses more compute than baseline (30% more), but I think this is largely acceptable given the notable improvements from baseline.
Results analysis also seems reasonable.
Supplementary Material: No, the paper is self-contained.
Relation To Broader Scientific Literature: This paper makes several connections to the broader literature.
- Extensive discussion of the Transducer and its related work in Section 4.
- Extensive discussion of related prior art in Section 5.3, including Wait-k, RealTrans, etc.
Essential References Not Discussed: Would recommend adding a reference for forward/backward algorithm.
Other Strengths And Weaknesses: This paper investigates a less-studied problem, non-monotonic alignment, which is a genuine problem in real-world applications. For the same reason, the paper's impact is confined to these specific areas of real-world problems.
Other Comments Or Suggestions: N/A
Questions For Authors: For Table 2, why would MonoAttn still out-perform the Transducer with an infinite chunk size? I would anticipate that in such settings, alignment is not as important, as the context includes everything that may be relevant for generating the output.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Thank you for your acknowledgement of this work. We will make every effort to address your remaining concerns.**
>***1. For Table 2., why would MonoAttn still out-perform Transducer with an infinite chunk size? I would anticipate that in such settings, alignment is not as important, as the context includes everything that may be relevant for genderating output?***
Yes, when the chunk size is infinite, the posterior alignment would be that all predictor states align with the last encoder state with a probability of 1. This means that the context in the cross-attention would encompass the entire source. However, it is important to note that the naive Transducer does not include such an attention mechanism. Instead, its encoder and predictor are loosely coupled through a joiner (typically a simple MLP), which may limit its modeling capacity compared to our approach.
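To illustrate this architectural difference in miniature (toy vectors and a stand-in elementwise MLP, not the actual model code):

```python
import math

def joiner(enc_t, pred_u):
    """Naive Transducer joiner: combines a *single* encoder state with the
    predictor state (elementwise add + ReLU as a stand-in simple MLP)."""
    return [max(0.0, e + p) for e, p in zip(enc_t, pred_u)]

def attention_context(enc_states, pred_u):
    """Cross-attention: the predictor state queries *all* encoder states
    available so far and receives a weighted mixture of them."""
    scores = [sum(e * q for e, q in zip(enc, pred_u)) for enc in enc_states]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    dim = len(enc_states[0])
    return [sum(w * enc[d] for w, enc in zip(weights, enc_states))
            for d in range(dim)]
```

The joiner only ever mixes one encoder state with the predictor state, whereas the attention context can pool the whole source history, which is the extra modeling capacity referred to above.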
>***2.This paper investigates a less-studied problem of non-monotonic alignment, which is a true problem in real-world applications. For the same reason, the paper is limited in that its impact is limited to these specific areas of real world problems.***
Yes, the scope of this paper is limited to streaming sequence generation scenarios, specifically where the target sequence consists of discrete tokens. Given that an increasing number of studies have pointed out that data from audio, images and video modalities can be tokenized into discrete sequences in a manner similar to text [1,2], we believe this research can be widely applied to various real-time streaming tasks.
[1] SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models
[2] Finite Scalar Quantization: VQ-VAE Made Simple
>***3.Would recommend adding a reference for forward/backward algorithm.***
Thank you for your suggestion! We will fix it. | Summary: This paper introduces MonoAttn-Transducer to tackle streaming generation tasks. MonoAttn-Transducer adds monotonic attention mechanism on top of Transducer and uses forward-backward algorithm to infer the alignment, which is then used to compute expected context representations used in monotonic attention. Experiments on simultaneous speech-to-text/speech translation (MuST-C and CVSS-C datasets) show that MonoAttn-Transducer performs better than vanilla Transducer when the chunk size of speech encoder is no less than 640ms.
## update after rebuttal
The authors have addressed my concerns regarding LAAL and the prior distribution. However, while they claim to have tested AlignAtt, no results were provided in the rebuttal. Additionally, I share Reviewer zJy7’s concern about the quality of the writing. Therefore, I will maintain my current score.
Claims And Evidence: > Claim 1: vanilla Transducer have limited ability to attend to the input stream history during decoding, making it hard to manage reorderings. MonoAttn-Transducer manages re-ordering better with explicit monotonic attention.
This claim is true when the chunk size is large or when no more than one word is reordered, as shown in Figure 2. It is not true when the chunk size is small and more than one word is reordered.
> Claim 2: an efficient training algorithm is proposed to avoid direct handling exponentially large number of translation trajectories.
This claim is true as shown in Table 2 and Section 6.2.
Methods And Evaluation Criteria: The method itself looks complicated at first glance, but the intuition behind it is clear. One concern is equation (11). It is possible to obtain a better prior by either leveraging the confidence of pretrained MT models or word alignment methods as in [1]. Another concern is that the expected contextual representations still lead to a training-inference mismatch.
The average lagging (AL) evaluation metric is problematic, since AL is not reliable when over-generation happens. It is better to use length-adaptive average lagging (LAAL) [2], as in recent IWSLT workshops [3].
[1] Wang, M., Vu, T. T., Wang, Y., Shareghi, E., & Haffari, G. (2024). Conversational simulmt: Efficient simultaneous translation with large language models. arXiv preprint arXiv:2402.10552.
[2] Sara Papi, Marco Gaido, Matteo Negri, and Marco Turchi. 2022. Over-Generation Cannot Be Rewarded: Length-Adaptive Average Lagging for Simultaneous Speech Translation. In Proceedings of the Third Workshop on Automatic Simultaneous Translation, pages 12–17, Online. Association for Computational Linguistics.
[3] Ibrahim Said Ahmad, Antonios Anastasopoulos, Ondřej Bojar, Claudia Borg, Marine Carpuat, Roldano Cattoni, Mauro Cettolo, William Chen, Qianqian Dong, Marcello Federico, Barry Haddow, Dávid Javorský, Mateusz Krubiński, Tsz Kin Lam, Xutai Ma, Prashant Mathur, Evgeny Matusov, Chandresh Maurya, John McCrae, Kenton Murray, Satoshi Nakamura, Matteo Negri, Jan Niehues, Xing Niu, Atul Kr. Ojha, John Ortega, Sara Papi, Peter Polák, Adam Pospíšil, Pavel Pecina, Elizabeth Salesky, Nivedita Sethiya, Balaram Sarkar, Jiatong Shi, Claytone Sikasote, Matthias Sperber, Sebastian Stüker, Katsuhito Sudoh, Brian Thompson, Alex Waibel, Shinji Watanabe, Patrick Wilken, Petr Zemánek, and Rodolfo Zevallos. 2024. FINDINGS OF THE IWSLT 2024 EVALUATION CAMPAIGN. In Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024), pages 1–11, Bangkok, Thailand (in-person and online). Association for Computational Linguistics.
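To make the over-generation issue concrete, here is a simplified sketch of both metrics on unit-level delays with toy numbers, following the formulations described in the LAAL paper; the SimulEval implementations handle speech durations and edge cases that are omitted here.

```python
def _lagging(delays, src_len, tgt_len):
    """mean(d_i - (i - 1) / gamma) over target steps up to tau, the first
    step whose delay covers the entire source. delays[i] = number of
    source units read when target token i was emitted."""
    gamma = tgt_len / src_len                      # oracle generation rate
    tau = next((i + 1 for i, d in enumerate(delays) if d >= src_len),
               len(delays))
    return sum(delays[i] - i / gamma for i in range(tau)) / tau

def average_lagging(delays, src_len, ref_len):
    """AL: gamma is computed from the reference length alone, so a
    hypothesis that emits many target tokens early drives the score down."""
    return _lagging(delays, src_len, ref_len)

def laal(delays, src_len, ref_len, hyp_len):
    """LAAL: gamma uses max(hyp_len, ref_len), so over-generation can no
    longer be rewarded with an artificially low (even negative) lagging."""
    return _lagging(delays, src_len, max(hyp_len, ref_len))
```

For example, a hypothesis that emits 8 tokens against a 4-token reference while reading a 4-unit source yields an AL far below its LAAL, which is exactly the pattern behind abnormally low AL values.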
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Issues:
1. The author misses one important baseline AlignAtt [1], which is far better than EDAtt compared in Figure 2.
2. The AL values in Table 3 are abnormal: both 118 ms and 153 ms are implausibly low. One possible cause is over-generation.
[1] Papi, S., Turchi, M., Negri, M. (2023) AlignAtt: Using Attention-based Audio-Translation Alignments as a Guide for Simultaneous Speech Translation. Proc. Interspeech 2023, 3974-3978, doi: 10.21437/Interspeech.2023-170
Supplementary Material: No.
Relation To Broader Scientific Literature: The primary contribution of this work is the integration of monotonic attention [1] into the Transducer framework, which overcomes the limitations of the standard Transducer in handling the word reordering challenges inherent in simultaneous translation.
[1] Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic Infinite Lookback Attention for Simultaneous Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1313–1323, Florence, Italy. Association for Computational Linguistics.
Essential References Not Discussed: The paper claims to compare with previous state-of-the-art methods, but missing AlignAtt [1].
[1] Papi, S., Turchi, M., Negri, M. (2023) AlignAtt: Using Attention-based Audio-Translation Alignments as a Guide for Simultaneous Speech Translation. Proc. Interspeech 2023, 3974-3978, doi: 10.21437/Interspeech.2023-170
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: When you mention the energy $e_{u,t}$, it would be better to describe its meaning. Otherwise, it will be hard to understand for readers who have not read the monotonic attention paper.
Questions For Authors: 1. Could you please report the results with LAAL and include AlignAtt as the baseline? It will be interesting to see how MonoAttn-Transducer performs compared with a strong baseline under a more robust latency metric.
2. Could you please also report speech offset for simultaneous speech-to-speech translation?
3. Can you elaborate on why MonoAttn-Transducer does not improve over Transducer at chunk size of 320ms?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Thank you for your thoughtful review! We will make every effort to respond to your concerns.**
>***1. Concerns regarding latency metric that are biased toward over-generation:*** *The AL in Table 3 is abnormal, both 118 and 153 ms are abnormally low, which I find hard to believe. One possible cause is over-generation. The average lagging (AL) evaluation metric is problematic, since AL is not reliable if the over-generation happens. It is better to use length adaptive average lagging (LAAL).*
Thank you for your suggestion. Actually, we present the S2T results using LAAL in Table 6, App. C. Apologies for the confusion. For clarity, we have included the results from Table 6 here and added the S2S results using LAAL and Start Offset.
**Left: EN-ES, Right: EN-DE**
| | **Chunk Size (ms)** | 320 | 640 | 960 | 1280 | 320 | 640 | 960 | 1280 |
|-|-|-|-|-|-|-|-|-|-|
| **Transducer** | **LAAL (ms)**|1168|1466|1847|2220|1258|1563|1942|2312|
| |**LAAL_CA (ms)**|1381|1589|1944|2300|1444|1673|2028|2389|
|**MonoAttn-Transducer**|**LAAL (ms)**|1230|1475|1837|2204|1317| 1582|1957|2305|
| |**LAAL_CA (ms)**|1453|1607|1945|2295|1501|1702|2056|2387|
**FR-EN S2S**
||**Chunk Size (ms)**|320|
|-|-|-|
| **Transducer**| **AL (ms)**|153|
||**LAAL (ms)**|984|
||**Start Offset (ms)**|1520|
| **MonoAttn-Transducer**|**AL (ms)**|118|
||**LAAL (ms)**|918|
||**Start Offset (ms)**|1491|
**We have realized that LAAL is a more robust metric for assessing latency (especially in S2S), and we will replace the metrics in the main body of the paper with LAAL for clearer presentation.**
>***2. One concern is equation (11). It is possible to obtain a better prior by either leveraging the confidence of pretrained MT models or word alignment methods. Another concern is that the expected contextual representations still lead to a training-inference mismatch.***
Leveraging the confidence of pretrained MT models or word alignment methods for prior design is a possible approach. However, it would further complicate the training pipeline. In practice, we find that the posterior alignment is relatively robust to the choice of prior. Specifically, we examine the impact of two different prior choices: a diagonal prior and a uniform prior.
| | **Chunk Size ($ms$)** | 320 | 640 | 960 | 1280 |
|---------------------------|-----------------------|-------|-------|-------|-------|
| **$p^{\mathrm{dia}}$** | **BLEU** | 24.72 | 26.74 | 27.05 | 27.41 |
| | **AL ($ms$)**| 997 | 1239 | 1606 | 1991 |
| **$p^{\mathrm{uni}}$** | **BLEU** | 24.89 | 26.68 | 27.26 | 27.11 |
| | **AL ($ms$)**| 993 | 1249 | 1601 | 1983 |
As shown, MonoAttn-Transducer's performance demonstrates robustness to the choice of prior alignment. **We visualize the posterior alignment when using different priors in this [anonymous URL](https://github.com/RandomUsername192/AnnoymousMaterials/blob/main/Comparison_cut.pdf). We have observed that, even with significant differences in the prior distribution, the posterior remains fairly robust.** This nice property relieves concerns about the imperfect design of the prior.
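For illustration, one common way to parameterize such priors looks as follows (a hypothetical Gaussian-bump parameterization for the diagonal prior; the exact form used in the paper may differ):

```python
import numpy as np

def alignment_prior(U, T, kind="diagonal", sigma=0.2):
    """Prior p(g(u)=t): each row u is a distribution over source positions t.

    "diagonal" concentrates mass around t/T ~ u/U; "uniform" spreads it
    evenly.  The Gaussian form and width sigma are illustrative choices.
    """
    if kind == "uniform":
        return np.full((U, T), 1.0 / T)
    u = (np.arange(1, U + 1) / U)[:, None]   # relative target positions
    t = (np.arange(1, T + 1) / T)[None, :]   # relative source positions
    logits = -((t - u) ** 2) / (2 * sigma ** 2)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)  # normalize each row

prior = alignment_prior(U=5, T=8, kind="diagonal")
```

Both variants produce valid row-stochastic prior matrices; the diagonal one peaks near the monotonic alignment path, while the uniform one is maximally uninformative.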
>***3. The author misses one important baseline AlignAtt [1], which is far better than EDAtt compared in Figure 2.***
Thank you for the reminder. We have checked AlignAtt's paper and found that it performs particularly well at medium-to-high latency. We will include it in Fig. 2 to facilitate a comparison between different streaming generation methods.
>***4. Can you elaborate on why MonoAttn-Transducer does not improve over Transducer at chunk size of 320ms?***
On one hand, when the chunk size is 320 ms, the Transducer's prediction window is very small, which limits its flexibility in managing reordering. On the other hand, we found that the small improvement observed with a 320 ms chunk size mainly appears when using BLEU as the metric; results on COMET show that MonoAttn-Transducer still demonstrates a significant improvement.
>***5. Could you please also report speech offset for simultaneous speech-to-speech translation?***
Thank you for your suggestion. We have incorporated the results into the table above. | Summary: This paper proposes Mono-Attn-Transducer, a streaming sequence model that combines Transducer and monotonic attention. A novel training procedure that utilizes approximate alignment posteriors and alignment priors made training possible without expensive enumeration over an exponential search space or the use of a latency loss. Experimental results demonstrated strong performance on standard speech-to-text and speech-to-speech translation tasks.
## Update after rebuttal
Apologies for the delayed response. I have been sick the past 2 weeks.
Thanks for clarifying on the CAAT results! That addresses one of my main concerns with this paper. However, I am afraid the other concern (unclear presentation of the model architecture) is still valid. Without major revisions to the current draft, it would be very difficult for readers to grasp even just the high level architecture without looking at the code. For that reason, I could only raise my score to 2.
Claims And Evidence: The main claim of this paper is the strong performance of the proposed method for streaming speech-to-text and speech-to-speech translation. To support this claim, this paper presents
- Results on streaming speech-to-text translation in Table 2 and Figure 2
- Results on streaming speech-to-speech translation in Table 3
The results were on standard datasets on these tasks, enabling comparison across a wide array of existing results.
However, I have a few concerns:
- The results of CAAT in Figure 2 are quite a bit worse than what's reported in [Liu et al, 2021]. There, Tables 5 & 6 report the following BLEU/AL for En-De and En-Es for CAAT:
|En-De|AL(ms)|BLEU|
|-|-|-|
||508.1|20.5|
||813.8|21.4|
||1114.9|21.8|
||1443.4|22.2|
||1800.6|22.4|
||2137.8|22.6|
||Offline|23.2|
|En-Es|AL(ms)|BLEU|
|-|-|-|
||355.9|24.0|
||623.2|25.8|
||955.9|26.3|
||1275.9|26.4|
||1647.7|26.6|
||1977.3|27.1|
||Offline|27.5|
It's unclear whether the CAAT results in Figure 2 of this paper were reproduced by the authors themselves. What is reported in [Liu et al, 2021] is significantly better and much closer to the results of Mono-Attn-Transducer in Table 2. If I am not mistaken, [Liu et al, 2021] used both smaller models and a smaller right context in their experiments. Thus a clarification of the discrepancy between the reported results would be extremely important for a more reliable assessment of the actual contribution of Mono-Attn-Transducer.
- The speech-to-speech translation results were only compared against the authors' own offline model, not even against [Zhao et al, 2024], which was cited in Section 5.4 (and which reported much higher offline BLEUs). Admittedly, this paper reports BLEUs under much lower latency than [Zhao et al, 2024]; still, it would be useful if more comparable results could be reported.
[Liu et al, 2021]: https://aclanthology.org/2021.emnlp-main.4.pdf
[Zhao et al, 2024]: https://arxiv.org/pdf/2410.03298
Methods And Evaluation Criteria: The proposed Mono-Attn-Transducer appears to be a sensible solution to streaming sequence prediction problems. However, I would need some more information from the authors before I could make a good assessment, because several key details of the proposed method appear to have been left out:
- There is not a clear description of the overall architecture of Mono-Attn-Transducer. From the context, it appears that the overall architecture of Mono-Attn-Transducer is very similar to TAED in [Tang et al, 2023] except that previous decoder states are not updated when new source features become available. This however is only my best guess.
- According to Algorithm 1, the training loss is the negative log-likelihood (from Equation (3)) based on $c_u$. $c_u$ is the approximate average cross attention output based on my interpretation of Equation (8). However $c_u$ depends on $s_u$, the predictor state which in turn depends on the actual alignment path $g(\cdot)$. It is not clear to me which alignment path is used to produce $s_u$.
- Further, it is not clear to me how $c_u$ is used. My guess is that it's passed to the joiner to join with each $h_t$ to produce the Transducer sum of alignment path probabilities.
In summary, I am not confident that anyone with the information currently available in this paper can reproduce Mono-Attn-Transducer.
[Tang et al, 2023]: https://aclanthology.org/2023.acl-long.695.pdf
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: The experimental designs are sound and widely accepted practice.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: This paper is a continuation in the exploration of enhancing a Transducer model with cross attention in streaming sequence prediction. CAAT ([Liu et al, 2021]) and TAED ([Tang et al, 2023]) are the two existing papers most closely related to this paper. The key technique in this paper draws inspiration from the line of work on monotonic attention (such as [Raffel et al, 2017] and [Arivazhagan et al, 2019])
- CAAT separates self attention and cross attention layers in the Transformer decoder, using the stack of self attention layers as the Transducer predictor to encode the output history without any alignment information, and the stack of cross attention layers as the joiner. As a result, the standard dynamic programming algorithm for computing the sum of alignment path probabilities in a Transducer lattice is still applicable. Because waiting until the end of source gives the joiner the most complete source information, a latency loss is necessary in CAAT training to prevent degenerate model behavior.
- TAED took a different path and kept the Transformer decoder intact as the Transducer predictor. At each time step $t$, the Transformer decoder states are completely recomputed with all the source features available so far ($h_{1:t}$). As a result, the standard dynamic programming algorithm for computing the sum of alignment path probabilities in a Transducer lattice is also applicable at the cost of additional Transformer decoder computation. An AED loss term is introduced to, ostensibly, prevent degenerate model behavior similar to CAAT.
- Mono-Attn-Transducer took the average cross attention feature technique commonly employed in monotonic attention ([Raffel et al, 2017] and [Arivazhagan et al, 2019]) to approximate the sum of alignment path probabilities, with a novel alignment prior mechanism and a two-stage estimation. The strong alignment priors ensure that training would not lead to degenerate model behavior.
[Raffel et al, 2017]: https://proceedings.mlr.press/v70/raffel17a/raffel17a.pdf
[Arivazhagan et al, 2019]: https://arxiv.org/pdf/1906.05218
[Tang et al, 2023]: https://aclanthology.org/2023.acl-long.695.pdf
[Liu et al, 2021]: https://aclanthology.org/2021.emnlp-main.4.pdf
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths
- The novel training method is effective and elegant in not needing a latency loss.
- The reported speech-to-text translation results are strong even though the comparison with CAAT/TAED is inconclusive.
Weaknesses
- Key details are missing that prevent the readers from fully understanding or reproducing Mono-Attn-Transducer.
Other Comments Or Suggestions: - Equation (1) is inaccurate and misleading for readers not familiar with Transducers. $\alpha(t, u)$ and $\beta(t, u)$ are marginal probabilities over alignment paths, not probabilities over the output prefix/suffix without blanks:
- $\alpha(t, u)$ is the sum of probabilities of partial alignment paths where $y_u$ being produced at $x_t$ is the last alignment label. $p(y_{1:u} | x_{1:t})$ is a confusing notation that strictly speaking includes probabilities of alignment paths where $y_u$ is produced anywhere between $x_1$ to $x_t$, followed by zero or more blanks.
- $\beta(t, u)$ is the sum of probabilities of partial alignment paths starting right after $y_u$ being produced at $x_t$. $p(y_{u+1:U} | x_{t:T})$ thus has the same problem as $p(y_{1:u} | x_{1:t})$. Additionally, $x_i$ are the source input features, instead of the encoder features $h_t$, so $\beta(t, u)$ depends on $x_{1:T}$ regardless of $t$ because $h_t$ in most unidirectional encoders can depend on the entire $x_{1:t}$, thus $\beta(t, u)$ depends on the entire $x_{1:T}$.
- Equation (4) should probably include $y_{u-1}$ on the right hand side similar to Equation (5) of [Tang et al, 2023].
- Section 3.2.1 should emphasize that the alignment posterior is approximate.
- (230-235, left column): Bernoulli variables in monotonic attention and blanks in Transducer are mathematically equivalent. Standard Transducer simply folds the Bernoulli variable into a multi-class prediction. A variant of Transducer [Variani et al, 2020] actually uses a separate Bernoulli variable. Conversely, Transducer's dynamic programming algorithm can also be used to compute the marginals in monotonic attention (and is often one or two orders of magnitude faster than the so-called "efficient" algorithm in [Ma et al, 2023b]).
[Tang et al, 2023]: https://aclanthology.org/2023.acl-long.695.pdf
[Variani et al, 2020]: https://arxiv.org/pdf/2003.07705
[Ma et al, 2023b]: https://arxiv.org/abs/2312.04515
Questions For Authors: As outlined above, my main concerns are the comparison with CAAT, and the lack of key architecture details.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Thank you for your efforts in reviewing! We will make every effort to respond to your concerns.**
>***1.However I would need some more information from the authors before I could make a good assessment.***
**We will address your questions about the method point by point.**
1. >*It appears that the overall architecture of Mono-Attn-Transducer is very similar to TAED except that previous decoder states are not updated when new source features become available. This however is only my best guess.*
Yes, in the inference of Mono-Attn-Transducer, the generated predictor states are not updated when receiving new source features. When the predictor encodes the $u$-th target state, it depends on previous predictor states and the currently available source:
$s_{u} = f_{\theta}(s_{0:u-1},h_{1:g(u)},y_{u-1})$
2. >*$c_u$ is the approximate average cross attention output based on my interpretation of Equation (8). However $c_u$ depends on $s_u$, the predictor state which in turn depends on the actual alignment path $g(\cdot)$. It is not clear to me which alignment path is used to produce $s_u$.*
In fact, the core idea of the approach lies in the uncertainty of the alignment path used to produce $s_u$ in training. Therefore, a posterior approximation is required to guide the learning process. To analyze this issue, we can further examine Eq. 4 in detail. In fact, the predictor producing $s_u$ primarily relies on two steps:
A. **Self-attention** with earlier predictor states ($s_{0:u-1}$): **This is deterministic.**
B. **Cross-attention** with encoder states ($h_{1:g(u)}$): Since the specific value of $g(u)$ is unknown in training, **we instead estimate the posterior alignment (Eq. 7) and use the resulting alignment probability to approximate the expected context representation $c_u$ in the cross-attention (Eq. 8).**
3. >*Further, it is not clear to me how $c_u$ is used. My guess is that it's passed to the joiner to join with each $h_t$ to produce the Transducer sum of alignment path probabilities.*
As mentioned earlier, $c_u$ represents the expected context representation (**in other words, $c_u$ is the output of the cross-attention module**). This $c_u$ is then passed to the next module in the predictor, typically an FFN. **We refer to the final output of the predictor as $s_u$** (Eq. 4), which is subsequently passed to the joiner.
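To summarize the two steps schematically, the expected context computation can be sketched as follows (assuming simple dot-product attention and toy shapes; this illustrates the idea behind Eq. 8, not our exact implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def expected_context(q_u, h, align_post):
    """Expected cross-attention output for target step u.

    align_post[t] ~ p(g(u) = t+1): posterior probability that token u is
    emitted after reading t+1 encoder states.  For each possible read
    position, attend only over the prefix h[:t+1], then average the
    resulting context vectors under the posterior alignment.
    """
    T, d = h.shape
    c = np.zeros(d)
    for t in range(T):
        attn = softmax(h[: t + 1] @ q_u / np.sqrt(d))  # attention over prefix
        c += align_post[t] * (attn @ h[: t + 1])
    return c

rng = np.random.default_rng(0)
h = rng.normal(size=(6, 4))    # encoder states h_{1:T}
q_u = rng.normal(size=4)       # query derived from the predictor state
post = np.array([0.0, 0.1, 0.2, 0.4, 0.2, 0.1])  # toy posterior alignment
c_u = expected_context(q_u, h, post)
```

With a one-hot posterior on the last position, this reduces exactly to ordinary full-context cross-attention, which is the offline limiting case.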
>***2.Key details are missing that prevent the readers from fully understanding or reproducing Mono-Attn-Transducer.***
**We hope the previous clarifications help better understanding. We also provide the source code in this [anonymous URL](https://github.com/RandomUsername192/AnnoymousMaterials) to ensure reproducibility.**
>***3.Concerns on Baseline Results***
1. >*Clarification of CAAT results.*
We used the code provided by CAAT for replication but did not obtain the results reported in their paper. Since CAAT did not provide the distillation data used to train their model, we trained with the same data we used for training the Transducer, which could be the source of the discrepancy. The official CAAT config (audio_cat) does use fewer params; however, we argue that this is a compromise made due to its $O(T)$ memory usage in training.
2. >*Clarification of S2S results.*
In fact, Zhao et al. specifically augmented the speech synthesis module, while the Transducer remains unchanged. They use an acoustic LM to refine semantic tokens generated by the Transducer and then generate the waveform. **However, their design is not directly related to the Transducer itself, but rather to improving waveform generation.** Our focus is on the Transducer, and we opt for a standard approach [1]: directly converting generated semantic tokens to waveforms using a vocoder.
[1] Speech Resynthesis from Discrete Disentangled Self-Supervised Representations
>***4.Comments on definition of $\alpha(t,u)$ and $\beta(t,u)$.***
We noticed you argued that $\alpha(t,u)$ is the sum of probabilities of partial alignments where $y_u$ being produced at $x_t$ is the last alignment label.
**However, this is not correct. $\alpha(t,u)$ does incorporate the probabilities where $y_u$ is followed by zero or more blanks. $\alpha(t,u)$ is precisely defined as the probability of generating $y_{1:u}$ from $x_{1:t}$ in the original Transducer paper (see Section 2.4 of the paper). Similarly, $\beta(t,u)$ is the probability of outputting $y_{u+1:U}$ from $x_{t:T}$. There is no need for $y_u$ to be produced at $x_t$**.
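This can also be checked mechanically: the standard forward recursion $\alpha(t,u) = \alpha(t-1,u)\cdot b(t-1,u) + \alpha(t,u-1)\cdot y(t,u-1)$ sums exactly the path probabilities, trailing blanks included. A small numerical sketch with toy (hypothetical) transition probabilities, verified against brute-force path enumeration:

```python
from itertools import combinations
from math import isclose

T, U = 3, 2  # toy lattice: 3 encoder frames, 2 target tokens

# Toy transition probabilities (hypothetical numbers; 1-based t, 0-based u):
# emit[t][u]  = prob. of emitting token y_{u+1} at lattice node (t, u)
# blank[t][u] = prob. of emitting blank at lattice node (t, u)
emit  = [None, [0.3, 0.2], [0.4, 0.3], [0.5, 0.25]]
blank = [None, [0.6, 0.5, 0.4], [0.5, 0.4, 0.6], [0.4, 0.5, 0.7]]

# Forward recursion: alpha[t][u] sums all path prefixes reaching node (t, u),
# i.e. producing y_{1:u} from the first t frames, trailing blanks included.
alpha = [[0.0] * (U + 1) for _ in range(T + 1)]
alpha[1][0] = 1.0
for t in range(1, T + 1):
    for u in range(U + 1):
        if (t, u) == (1, 0):
            continue
        a = 0.0
        if t > 1:
            a += alpha[t - 1][u] * blank[t - 1][u]
        if u > 0:
            a += alpha[t][u - 1] * emit[t][u - 1]
        alpha[t][u] = a
lik_dp = alpha[T][U] * blank[T][U]  # reach (T, U), then the final blank

# Brute force: every alignment interleaves U emits with T-1 blanks,
# followed by a mandatory final blank at (T, U).
lik_bf = 0.0
moves = T - 1 + U
for emit_pos in combinations(range(moves), U):
    t, u, p = 1, 0, 1.0
    for m in range(moves):
        if m in emit_pos:
            p *= emit[t][u]; u += 1
        else:
            p *= blank[t][u]; t += 1
    lik_bf += p * blank[T][U]
```

The DP and the explicit sum over all alignment paths agree, confirming that $\alpha(t,u)$ marginalizes over every path reaching node $(t,u)$, including those with blanks after $y_u$.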
>***5.Suggestions on Equation (4) and Section 3.2.1.***
Thank you! We will fix it.
>***6.Comments on Equivalence between monotonic attention and Transducer.***
We quite agree with this statement. Our approach actually leverages the DP algorithm of the Transducer to assist in the computation of monotonic attention. We will include Variani et al. (2020) in the discussion of the paper. Thank you for the reminder. | null | null | null | null | null | null |
LAION-C: An Out-of-Distribution Benchmark for Web-Scale Vision Models | Accept (poster) | Summary: This paper presents a novel dataset, designated as LAION-C, where the letter C denotes "corrupted." This dataset bears similarities to ImageNet and ImageNet-C.
The dataset under consideration contains six corruptions (Mosaic, Glitched, Vertical Lines, Stickers, Geometric Shapes, Luminance Checkerboard) and 16 superclasses.
Rather than focusing on natural corruptions, the authors' objective is to create a highly synthetic corrupted dataset.
The evaluation process entails a psychophysical experiment with 19 participants.
## update after rebuttal
None
Claims And Evidence: The correlation between model size and robustness is positive; that is to say, the larger the model, the more robust it becomes.
This assertion is supported by empirical evidence from experiments conducted on vision foundation models, which involved the utilization of 19 human evaluators. However, it is noteworthy that the performance of these human evaluators frequently surpasses that of the models.
Methods And Evaluation Criteria: The provided dataset is a benchmark for out-of-distribution (OOD) detection. It has been observed that the accuracy of the models decreases to a greater extent than in ImageNet-C; therefore, the utilization of this dataset is logical.
Theoretical Claims: None
Experimental Designs Or Analyses: The primary concern pertains to the composition of the evaluator cohort, which consists of 19 individuals.
A comparison between ImageNet-C and LAION-C demonstrated the significance of this factor.
Supplementary Material: The supplementary material contains code, the superclass mapping, and three scripts. The first script is used to apply distortions. The second script is used to generate a custom dataset. The third script is used to generate trial folders. In this process, random images are selected.
Relation To Broader Scientific Literature: OOD detection methods have become better, but the problem is still a challenge.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths
- This dataset can be a new challenge in OOD detection.
- Models often outperform humans.
- Their results show that vision foundation models appear robust when evaluated on ImageNet-C, but show lower accuracy when evaluated on LAION-C.
- The dataset was manually filtered to ensure that only one object appears in each image.
- Mechanical writing.
Weaknesses
- Only 16 classes: some OOD detectors could be class-dependent, and fewer classes could be easier for them.
- The explanation of why artificial corruptions are important is missing.
- There is no categorization of the distribution shift. Is it a covariate shift? [1]
[1] https://arxiv.org/pdf/2501.18463v2
Other Comments Or Suggestions: None
Questions For Authors: - Did you take into account that ImageNet has .png (validation set) images while LAION-C has .jpg compressed images?
- Are 19 evaluators a good number?
- Why only 16 classes? Isn't that easier for some OOD detectors? Wouldn't the hierarchical 1,000 classes of ImageNet be more difficult if the detector takes the class probabilities into account?
- There are different intensity levels. The ImageNet-C does not have this. If I want to use this dataset, how should I use the different intensity levels?
Ethical Review Flag: Flag this paper for an ethics review.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your favourable review. We appreciate that you find the utilization of LAION-C **logical** because it **poses a new challenge for OOD-evaluation**, and **value the human experiment**. Prompted by your feedback, we will update the manuscript to include a section motivating our artificial corruptions better, and publish a version of our benchmark that consists of 1,000 class labels (LAION-C-1K). Thank you for alerting us to the existing nomenclature regarding the different types of distribution shifts (semantic vs covariate shifts); we have updated the manuscript accordingly. Our shift is indeed of the covariate shift kind.
*Q: “Did you take into account that the ImageNet has .png (validation set) images and the Laion-C has .jpg compressed images?”*
A: Thank you for raising this point. We double-checked that ImageNet images are JPEG-compressed in both training and validation sets. Since LAION-C base images were taken from the ImageNet validation set, analogous to ImageNet-C, this design choice matches other benchmarks. We therefore also use JPEG-compression, setting parameters which give close to lossless compression.
*Q: “Are 19 evaluators a good number?”*
A: Valid question. Yes, 19 evaluators are a sufficient and typical number for this type of study: Since each human observer is exposed to 2 corruptions, every corruption is seen by at least 6 different humans, ensuring good coverage. For reference, [1] used only 4 humans per corruption for most of their corruptions. In psychophysics, this type of design is called a “small N design” (fewer participants that see many images, [3]). More quantitatively, we see in Fig. 10 that the 95% confidence intervals surrounding our estimate of the average human performance are quite small (±4.61%). Statistically speaking, it is therefore very unlikely that adding more human observers would change any of the results or interpretations. We will add a detailed discussion of this to the Appendix. Since beating the best human is more difficult than beating the average human, we analyze peak performance rather than average performance in Fig. 5, and it is impressive to see that good models beat every single human. The confidence interval is smaller for the models because they were evaluated on all images, while each human only saw a subset of images for practical reasons. In the Appendix, we will now include a comparison to both average and peak human performance.
*Q: “Why only 16 classes?”*
A: We used 16 classes to enable a comparison to human observers in line with earlier work [1, 2], because humans cannot be expected to reliably classify images into too many categories, simply due to pragmatic limitations (size of the response icons, time per trial etc.) Prompted by your suggestions, we have decided to also provide a version of our dataset consisting of all 1,000 ImageNet classes, which we call LAION-C-1k. The dataset is ready and has been submitted for review on Zenodo. While we are not including a direct link at this stage respecting the rebuttal policies, we will link the dataset in the camera-ready version.
*Q: “How should I use the different intensity levels?”*
A: We included different intensity levels so that practitioners would be able to see the decline in model performance and obtain a more fine-grained evaluation of robustness. This is done by some other OOD datasets as well, like model-vs-human [1] and ImageNet-C. The protocol is to evaluate a new model on all levels of our benchmark and report the average across levels for the benchmark result (we’ll make sure to make this clear in the manuscript and github to ensure comparability). Additionally, users are free to report performance curves like we do in e.g. Fig. 5.
[1]: Partial success in closing the gap between human and machine vision (Geirhos et al., 2021)
[2]: Generalisation in humans and deep neural networks (Geirhos et al., 2018)
[3] Small is beautiful: In defense of the small-N design (Smith and Little, 2018)
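In code, the intended benchmark protocol (average accuracy across all corruptions and severity levels, with a hypothetical `evaluate` interface) amounts to:

```python
def laion_c_score(evaluate, corruptions, severities=range(1, 6)):
    """Benchmark score: accuracy averaged over all corruption/severity pairs.

    `evaluate(corruption, severity)` is a user-supplied callable returning
    the model's accuracy on that split (hypothetical interface).
    """
    accs = [evaluate(c, s) for c in corruptions for s in severities]
    return sum(accs) / len(accs)

# Usage with a stand-in evaluator whose accuracy drops 10% per severity level:
corruptions = ["mosaic", "glitched", "vertical_lines",
               "stickers", "geometric_shapes", "luminance_checkerboard"]
score = laion_c_score(lambda c, s: 1.0 - 0.1 * s, corruptions)
```

Reporting per-level curves (as in Fig. 5) then simply means keeping the inner list instead of averaging it.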
---
Rebuttal Comment 1.1:
Comment: Q: “Did you take into account that the ImageNet has .png (validation set) images and the Laion-C has .jpg compressed images?”
Well, but if you add the corruptions to the image and save it again as JPEG, shouldn't it be compressed again?
Thanks for clarifying the other questions. It is more understandable now.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We are pleased that our previous responses have helped clarify your questions. We export the corrupted images with parameters subsampling=0 and quality=100 using PIL. These parameters should result in an effectively lossless compression, making the effect of compression negligible. Saving images in JPEG format significantly saves on storage space and makes the data more accessible, and is therefore standard practice. Since ImageNet itself consists of JPEG images, there should not be detrimental effects due to this format. | Summary: This paper proposed a new dataset called LAION-C for evaluating image classification model robustness against out-of-distribution data. The LAION-C datasets select samples from ImageNet validation set, and added with 6 different types of synthetic distortions, each with 5 different levels of severity. A substantial evaluation was conducted to assess different commonly used neural network models performance on LAION-C dataset. A human review was also conducted to compare human and model performance.
Claims And Evidence: 1. One claim from the paper is that state-of-the-art image recognition models perform well on existing OOD datasets while degrading substantially on LAION-C, which suggests that LAION-C is a valuable dataset for testing model performance on OOD data. This is supported by results in Figure 3. There is a consistent performance degradation for all the evaluated models from ImageNet-C to LAION-C. I think, purely from a performance point of view, LAION-C is a more challenging benchmark than ImageNet-C.
2. Another claim from the paper is that models fine-tuned on the LAION-C dataset can improve to a performance comparable to that on clean data. This is supported by results in Table 1. This looks promising except for the Stickers and Mosaic categories. It would be helpful to provide more insight into why these two types of distortion remain more challenging than the others. To be more specific, the Stickers and Geometric Shapes distortions both occlude information in the clean image, yet the Geometric Shapes distortion can be handled very well with retraining. Similarly, Mosaic and Vertical Lines both obfuscate information, but Vertical Lines performance improves significantly with retraining.
3. Another claim from the paper is that models can achieve comparable or superior performance to humans in some OOD classes, while still struggling in other categories. For this claim, the evidence was provided in Figure 5, but more justification of the human performance is needed. We can see that the best model performance has a much narrower 95% confidence interval compared to humans. Is the best human performance based on an aggregation of all the participants, or on the overall best individual performer? Given the small subject population and the lack of expertise required to participate in the study, it would be more proper to base claims on average human performance rather than best human performance.
4. Another claim from the paper is that the LAION-C dataset is challenging but can potentially be solved. This is also supported by Table 1. I have some doubts about this claim, as we cannot tell whether the improved performance of the model trained with LAION-C comes with a regression in clean-data performance. In other words, it is unclear whether the model overfits to the distortions introduced in LAION-C. It would be helpful to show the same model's (trained with LAION-C data) performance on clean and distorted images side by side.
Methods And Evaluation Criteria: 1. The distortions were manually designed and programmatically generated. While they do pose a challenge for current recognition models, I'm a bit skeptical about the practicality of this kind of distortion in real-world use cases. It would be good to motivate why these 6 types of distortion were chosen.
2. LAION-C uses clean images from ImageNet. If so, why is it called LAION-C? Shouldn't it be called another variant of ImageNet? More importantly, the LAION dataset is known for its web scale and diversity, but LAION-C is neither web-scale nor diverse in terms of image classes. I think calling it LAION-C is misleading.
3. The LAION-C dataset only has 273 clean original images. I understand that to keep the human review experiment manageable, we may not be able to use a large-scale dataset. But for a final release, I would suggest extending the number of images and the number of classes further.
Theoretical Claims: There is no theoretical claims.
Experimental Designs Or Analyses: Centering around the proposed LAION-C dataset, the paper designed a series of experiments to answer the following questions: 1) How challenging is the LAION-C dataset for modern image recognition models? 2) How do humans perform on the LAION-C dataset? 3) Are there differences between the errors made by models and humans? 4) Can the LAION-C dataset be solved by recognition models? I found the experimental design in general thoughtful and comprehensive. For example, in evaluating the extent to which LAION-C is OOD, the authors utilized three different approaches, including qualitative assessment, quantitative measurement of model performance, and quantitative measurement of the difference between datasets.
The main concern I have, though, is that all the analysis and discussion anchor on the difference from ImageNet-C. This probably originates from the fact that LAION-C uses original images from ImageNet. There is not much discussion of, or contrast with, other OOD datasets. Furthermore, OOD work often uses cross-dataset evaluation for assessing model capability on OOD data. For example, in [1] the model is trained on ImageNet-1K and tested on a range of other datasets.
[1] Miyai, Atsuyuki, et al. "Locoop: Few-shot out-of-distribution detection via prompt learning." Advances in Neural Information Processing Systems 36 (2023): 76298-76310.
Supplementary Material: I have reviewed the majority of the contents in the supplementary materials. I found the datasheet especially helpful for understanding the contribution and details of the dataset.
Relation To Broader Scientific Literature: When discussing related work, the focus of this paper is on contrasting the LAION-C dataset with ImageNet-based datasets. There is no discussion of other OOD datasets. In addition, there is no experimental design related to the cross-dataset evaluation protocol, which is often adopted in the OOD literature.
Essential References Not Discussed: Some of the references that can be added are
[1] Miyai, Atsuyuki, et al. "Locoop: Few-shot out-of-distribution detection via prompt learning." Advances in Neural Information Processing Systems 36 (2023): 76298-76310.
[2] Yang, Jingkang, et al. "Generalized out-of-distribution detection: A survey." International Journal of Computer Vision 132.12 (2024): 5635-5662.
Other Strengths And Weaknesses: [Other strengths]
+ A variety of image recognition models were evaluated and compared for performance on LAION-C
+ The error consistency analysis between human and models is insightful to show different failure modes of human vision and computer vision.
+ The appendix provides a comprehensive discussion on what is LAION-C and how it is constructed using the template format.
[Other weaknesses]
- There is no justification, from a practical or theoretical point of view, for using the 6 proposed distortions; the choice of distortion types seems a bit arbitrary. It is unclear what the implication is of a model performing well on the LAION-C dataset.
- There is limited novelty in the dataset construction in three aspects: 1) the clean images are selected from the existing ImageNet dataset; 2) the construction of OOD samples follows the existing practice of artificially adding obstructions to, or altering, the image, which has been done in ImageNet-C (the only difference is the type of distortion); 3) there is no new proposal for evaluating OOD performance, which is still based on multi-class classification accuracy.
Other Comments Or Suggestions: Typo in supplementary materials: 131.040 should be 131,040.
Questions For Authors: 1. I do not fully understand why this proposed dataset is called LAION-C while all its original clean images come from ImageNet. In addition, the number of original images is only 273 from 16 classes, even though there are a total of 131,040 images accounting for all distortion variations. So I don't think it resembles the diverse collection and scale of LAION dataset. It would be helpful if the authors can clarify the rationale of naming the dataset as LAION-C instead of another variant of ImageNet. It seems the same kind of distortion can be applied to LAION images as well, in which case the resulting dataset can also be called 'LAION-C'. Why not?
2. On the claim that LAION-C can be solved, I don't think the Table 1 result is sufficient. A critical question is whether the improvement in LAION-C accuracy can be achieved without regression on the clean dataset. It would be good to evaluate the same model's performance on the clean dataset (see more discussion in 'Claims And Evidence'). If there is no regression on clean data, that means model performance on LAION-C can simply be improved by adding distortion as an augmentation, and it is indeed promising to solve LAION-C. Otherwise, the challenge of achieving robustness on unseen distortions remains unsolved and more evidence is needed on whether LAION-C can be solved.
3. OOD evaluation in the recent literature is typically based on distinct training and testing datasets, such as training on ImageNet and testing on iNaturalist. There is no discussion or comparison of LAION-C with this popular OOD evaluation protocol. It may be that model performance on ImageNet-C has become saturated, but I'm not sure that such cross-dataset evaluation has also become saturated. In that sense, the claim that modern image recognition models are saturated on current benchmarks is not well supported. Could the authors clarify the level of difficulty of evaluating on LAION-C compared to other cross-dataset evaluation protocols?
4. Related to point 3, what is the main novelty of constructing LAION-C?
5. Related to point 1 and 2, what is the significance of solving LAION-C? Can we draw a better conclusion that if a model does a better job on LAION-C, it has better robustness against OOD data regardless of its performance on other OOD evaluation benchmark? If not, what other insights we can gain from the performance on LAION-C dataset?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer,
thank you for taking the time to provide such detailed and insightful feedback. We were delighted to read that you found our **experimental design [to be] in general thoughtful and comprehensive**, and appreciated the **substantial evaluation and insightful error analysis**. We have clarified the manuscript based on your suggestions, added citations, and scaled the dataset to a substantially larger version with 1k classes, for a total of 1.5M images.
*Q: Motivation for choosing these 6 types of distortions.*
Our distortions needed to fulfill the desiderata of 1. being exotic enough to have a low occurrence probability even in web-scale datasets and 2. testing relevant feature extraction capabilities of the models. We now include this motivation in Sec 2.1.
To solve the Stickers and Mosaic distortion, a model needs to be able to holistically integrate the image, instead of being led astray by local image cues induced by sub-images, which is notoriously difficult for DNNs [1].
The Glitch and Vertical Lines distortions are the most exotic and globally disruptive image transformations we could find, and destroy the texture cues that models rely on [1].
The Geometric Shapes distortion tests amodal completion, a staple of human visual processing even in infants [2,3]. They also change the color distribution of the image, which humans are robust to because we do not rely primarily on color for object recognition [4,5].
The Luminance Checkerboard distortion tests a model’s ability to adapt to local lighting conditions, an important skill of the human visual system [6].
[1] Geirhos et al 2019
[2] Kellman and Spelke 1983
[3] Nanay 2018
[4] Tanaka and Presnell 1999
[5] Biederman and Ju 1988
[6] Heeger 1992
*Q: not much discussion or contrast with other OOD dataset*
Thanks for the suggestion - we have added a figure contrasting LAION-C with other OOD datasets: https://ibb.co/HfZmLkTd. This shows that LAION-C offers better resolution of model differences. Furthermore, in the camera-ready version we will substantially expand our discussion of other OOD datasets.
*Q: I don't think the Table 1 result is sufficient.*
Thanks for the great suggestion. While our initial approach demonstrates learnable signals in LAION-C, your suggestion offers a stronger proof of concept. We now fine-tuned the same model backbone as in Tab.1 on a mixture of clean and distorted ImageNet-Train images and evaluated on both ImageNet-Val 16 class and LAION-C 16 class. The results illustrated here (https://ibb.co/rGmQk2rG) indeed show that *‘improvement of LAION-C accuracy can be achieved without regression on the clean dataset’*. We will update Sec. 3.3 accordingly with the results and full experiment setup to strengthen our argument.
*Q: LAION-C only has 273 images*
We selected 273 images *per class*, yielding 4,368 base images and ~130k total images after corruption at different intensity levels. Nonetheless, we hear your point regarding broad coverage and scale. To address it, we extended our dataset to the full 50k IN-validation set, resulting in LAION-C-1k with 1.5 million images. The dataset is ready and has been submitted for review on Zenodo. While we are not including a direct link at this stage respecting the rebuttal policies, we will link the dataset in the camera-ready version.
*Q: Why is it called LAION-C?*
Great question. We follow common practice in the OOD literature, where datasets are named w.r.t. the dataset for which they’re OOD. E.g., ImageNet-Sketch is designed to be OOD for ImageNet (even though none of the sketch images are part of ImageNet since that would defeat the purpose of being OOD). We now explain this motivation in the introduction. Applying corruption to LAION images directly is possible, but they don’t have class labels, thus cannot be used (without proxy/pseudo labels) for classification.
*Q: [M]ore justification on the human performance is needed.*
Please see our response to ZcBa.
*Q: LAION-C difficulty vs other protocols?*
Our main results (Fig. 4) follow a cross-dataset evaluation protocol: The models were trained on web-scale datasets, fine-tuned on IN-1k but evaluated on LAION-C. In general, it is possible to apply any OOD evaluation protocol to LAION-C that one could also apply to other OOD-datasets, such as ImageNet-[A, R, C, Sketch].
*Q: Why are sticker and mosaic distortions more challenging?*
Great question. We introduced content-rich image tiles into the original image, to test the model's ability to process images holistically in spite of local distractions. These two distortions are built on simpler augmentations that mask or pixelate content, as illustrated in https://ibb.co/CpBvR2Y3. We observe an up to 35% drop on ViT-B32 compared to these easier distortions (https://ibb.co/hJj8pfQw), indicating that local signals pose a significant challenge. Fig.12 also supports this, as LLM-based models frequently direct attention toward the inserted tiles. | Summary: This paper introduces LAION-C, a new benchmark dataset for evaluating out-of-distribution (OOD) robustness of web-scale vision models. The authors argue that existing benchmarks like ImageNet-C are no longer sufficiently OOD for models trained on massive web datasets like LAION, as these models are likely exposed to similar corruptions during training. LAION-C proposes six novel, synthetic distortion types designed to be genuinely OOD even for web-scale models. The paper presents evaluations of various state-of-the-art models, including large language models with vision capabilities, on LAION-C and compares model performance to human psychophysical data, suggesting a paradigm shift where models are now matching or exceeding human OOD robustness in certain scenarios.
Claims And Evidence: The authors report that existing OOD evaluation benchmarks such as ImageNet-C are no longer truly OOD, since LAION-scale models are essentially trained on the whole web. This is anecdotally supported in Figure 1; the performance of zero-shot models is compared between the ImageNet-C and LAION-C test sets to show that LAION-C is harder to solve, and an FID score is computed to support the representational differences between the datasets.
These claims adequately support the limitations of ImageNet-C and the added difficulty of LAION-C for zero-shot models.
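For context, the FID mentioned above compares Gaussian statistics fitted to Inception features of two image sets. A minimal numpy/scipy sketch of the formula (an illustration, not the paper's code; it assumes the feature means and covariances have already been extracted):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet Inception Distance between two Gaussians
    (mean, covariance) fitted to Inception features of
    real and generated image sets."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary noise
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean)

# Identical distributions give FID ~ 0
mu, sigma = np.zeros(4), np.eye(4)
print(round(float(fid(mu, sigma, mu, sigma)), 6))  # 0.0
```

A lower FID means the generated-image feature distribution is closer to the real one, which is why both the dataset comparison here and the generation benchmarks report it with a down arrow.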
Methods And Evaluation Criteria: The authors evaluate a large suite of vision models as well as human discriminators, comparing how the difficulty increases for humans and machines as the perturbation intensity grows.
More ImageNet OOD variants could have been studied, such as ImageNet-Adversarial [2] or ImageNet-Sketch [3], which would represent more naturally occurring OOD perturbations.
[2] Hendrycks, Dan, et al. "Natural adversarial examples." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
[3] Wang, Haohan, et al. "Learning robust global representations by penalizing local predictive power." Advances in neural information processing systems 2019.
Theoretical Claims: No theoretical claims
Experimental Designs Or Analyses: Good design; OOD benchmarks other than ImageNet-C should be added and compared with.
Supplementary Material: I glanced at the human annotation setup
Relation To Broader Scientific Literature: This research is relevant to the community as it shows the limits of the ImageNet-C benchmark for OOD robustness and proposes an alternative.
Essential References Not Discussed: [2] Hendrycks, Dan, et al. "Natural adversarial examples." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
[3] Wang, Haohan, et al. "Learning robust global representations by penalizing local predictive power." Advances in neural information processing systems 2019.
Other Strengths And Weaknesses: The proposed artificial perturbations are effective at degrading the performance of zero-shot models, loosely align with human perception (Figure 5), and have been shown to be fair because they are learnable (Table 1).
One limitation, though, is that the synthetic nature of the perturbations raises the question of whether they represent real-world occurrences, i.e., actual failure cases of zero-shot models deployed in the real world.
In this scope, a complementary approach similar to ImageNet-Adversarial but applied to LAION would be relevant.
Other Comments Or Suggestions: The paper oversells the "paradigm shift" claim, as the proposed perturbations are in the same vein as the ImageNet-C corruptions.
Questions For Authors: Have the authors considered constructing an OOD test set in a similar manner to the ImageNet-A dataset but for LAION, to test real-world failure cases?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer,
thank you for your insightful comments! We are delighted that you find our work to have a **good experimental design** and consider it **relevant to the community**.
*Q: “More ImageNet OOD variants could have been studied / compared with”*
A: Thanks for this excellent suggestion. We have added a comparison of LAION-C and several well-established OOD datasets to the paper, see https://ibb.co/HfZmLkTd, which shows that LAION-C captures the variance in model performance better than other datasets, with a standard deviation of ~27%, whereas other common OOD datasets, on average, have only ~10%. LAION-C is tested on a 16-class basis, while other datasets typically use 200-1000 classes, making this result even more remarkable. We want to note, however, that the point of LAION-C is precisely not to capture “naturally occurring” perturbations (see next question). We also added the missing reference to ImageNet-Sketch you pointed out (note the related work section in Appendix A.1, where we already refer to ImageNet-A).
*Q: “[T]he synthetic nature of the perturbations questions whether they represent real world occurrences [...]”*
A: Indeed, LAION-C is designed to be synthetic, just like ImageNet-C for ImageNet. “Natural” images as used by other common OOD datasets are scraped from the web, making them no longer OOD in an era where large multimodal foundation models are trained on nearly the entire web, for multiple epochs: as of 2025, we need to assume that most natural images found on the web were part of the training corpus. Robust representations, however, should generalize to challenging images (e.g. identity-preserving transformations) no matter the type, both synthetic and natural—just like human perception is robust to many different types of OOD data. Being able to handle challenging synthetic input typically transfers to unexpected natural input, as evidenced by the “sim2real” line of work e.g. in robotics and autonomous driving.
*Q: “Have the authors considered constructing a OOD test set in a similar manner to the ImageNet-A dataset but for LAION to test real world failure cases?”*
A: Thank you for the suggestion. In the current era of large-scale training, we have to assume that models will eventually be trained on the entire internet, hence scraping the web for benchmark images, as was done for ImageNet-A, is no longer a feasible choice for obtaining an OOD dataset.
*Q: “The paper oversells the paradigm shift claim as the perturbations proposed are in the same vein as the ImageNet-C corruptions.”*
A: This might be a misunderstanding. The “paradigm shift” we refer to is the shift from humans outperforming models (as observed in prior work [1,2,3,4,5]) whereas now, based on our carefully collected human comparison data for LAION-C, the best models outperform humans. Apologies if that wasn’t clear, we’ll make sure to highlight and explain this better.
[1] Partial success in closing the gap between human and machine vision (Geirhos et al., 2021)
[2] Comparing deep neural networks against humans: object recognition when the signal gets weaker (Geirhos et al., 2017)
[3] ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints (Dong et al., 2023)
[4] Recognition in Terra Incognita (Beery et al., 2018)
[5] Why do deep convolutional networks generalize so poorly to small image transformations? (Azulay & Weiss, 2019) | null | null | null | null | null | null | null | null |
UniMC: Taming Diffusion Transformer for Unified Keypoint-Guided Multi-Class Image Generation | Accept (poster) | Summary: This paper proposes a dataset of human and animal images, and their keypoints, bounding boxes, and fine-grained captions. The dataset includes 786K images with 2.9M instances, averaging 3.66 instances per image. The annotations are obtained from the best among several candidates, different annotations having different best choices. Based on this dataset, the paper also proposes a controllable DiT-based framework for keypoint-guided image generation. The model can generate multiple subjects using their keypoints, class, bounding box, and a global text prompt. Both qualitative and quantitative results shows that the model significantly outperforms the previous work.
## update after rebuttal
The rebuttal addresses my most concerns. Since I already gave accept, I will keep my rating.
Claims And Evidence: The dataset is intended for generation purposes, and thus its average aesthetic score is higher than that of other keypoint datasets, such as COCO.
The proposed model is tested and ablated, showing better performance than baselines.
Methods And Evaluation Criteria: 1. This paper directly uses keypoint and bounding box coordinates as the control signal, converting them into Fourier features rather than spatial representations such as heatmaps, and this works better than previous work using spatial representations. It slightly counters my intuition, but also makes sense, because processing the signal in this implicit way may model interactions better. This setting could inspire future work on multi-object interaction generation for images and videos.
2. The detailed dataset processing and comparison of different methods for different annotations can benefit the community.
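To make point 1 concrete: a minimal sketch of what a coordinate-based (rather than heatmap-based) conditioning signal might look like, assuming a standard sinusoidal Fourier embedding. This is an illustration only, not the paper's actual encoder; the function name and band count are hypothetical:

```python
import numpy as np

def fourier_embed(coords, num_bands=8):
    """Map normalized (x, y) keypoint coordinates to Fourier features,
    a common alternative to rendering keypoints as spatial heatmaps.
    coords: array of shape (N, 2) with values in [0, 1]."""
    freqs = 2.0 ** np.arange(num_bands)             # (num_bands,)
    angles = 2 * np.pi * coords[..., None] * freqs  # (N, 2, num_bands)
    # sin/cos per frequency -> one flat vector per keypoint
    return np.concatenate(
        [np.sin(angles), np.cos(angles)], axis=-1
    ).reshape(coords.shape[0], -1)                  # (N, 4 * num_bands)

kpts = np.array([[0.25, 0.5], [0.75, 0.1]])  # two keypoints
emb = fourier_embed(kpts)
print(emb.shape)  # (2, 32)
```

Each keypoint becomes a fixed-length token that a transformer can attend over directly, which is one plausible reason such implicit representations handle variable instance counts and inter-object interaction more gracefully than rasterized heatmaps.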
Theoretical Claims: N/A
Experimental Designs Or Analyses: 1. From Table 3, class and pose accuracy are significantly improved for animals, but the metrics for humans show only mild improvement compared with ControlNet. What might be the reason? Can it be visualized with some attention maps or feature maps?
2. Some of the magic numbers should be explained. For example, why use 50% dropout for bounding boxes but only 15% for keypoints? I understand it must be a performance choice, but is there any insight into why? How does the dropout ratio affect the quantitative and qualitative results?
Supplementary Material: I checked all of them.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Dear Reviewer u8PW**,
Thank you for your review and constructive comments. During the rebuttal period, we have made every effort to address your concerns. The detailed responses are below:
> Q1: Why human shows mild improvement compared with ControlNet?
>
While our method shows only mild improvement on the *human* class compared to ControlNet, it brings **substantial gains across other categories**, especially animals, as shown in Table 3 of the main paper. This highlights the strength of our approach in handling diverse object classes.
Our core objective is to solve the task of **keypoint-conditioned multi-class image generation**, where *human* is just one of many categories. In contrast, ControlNet is specifically designed for **human-centric** keypoint-conditioned generation, and thus naturally excels in that narrow domain.
This difference is analogous to a **specialist vs. generalist** comparison. A specialist may outperform others on a single task, but a well-designed generalist can achieve strong performance across a broader spectrum. UNIMC is purposefully built as a generalist model that scales across multiple categories, not just humans.
> Q2: Explanation of dropout ratio choices.
>
In our initial experiments, we applied a **15% dropout ratio** to both bounding boxes and keypoints. However, we observed that the model quickly overfit to the bounding box information—likely because **coarse object localization is much easier to learn than fine-grained keypoint patterns**. As a result, the model tended to ignore the keypoint inputs, and the pose accuracy plateaued below 20%, regardless of the number of training iterations.
To address this issue, we increased the dropout ratio for bounding boxes, forcing the model to rely more on the keypoints. The table below shows how different dropout ratios for bounding boxes affect pose accuracy:
| Bounding box dropout ratio | Human AP**⬆️** | Animal AP**⬆️** |
| --- | --- | --- |
| 0% | 14.70 | 13.92 |
| 15% | 17.85 | 18.21 |
| 50% | 30.01 | 28.38 |
| 100% | 26.42 | 26.09 |
These results suggest that a **moderate dropout (around 50%)** strikes a good balance, encouraging the model to make better use of keypoint information without completely discarding spatial cues from bounding boxes. | Summary: The paper proposes a DiT based framework UniMC for keypoint guided multi-instance image generation and introduces HAIG-2.9M dataset designed for keypoint-guided human and animal image generation. Experiments are conducted on COCO, APT36K, and HAIG-2.9M datasets to show the efficacy of the proposed method.
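A minimal sketch of the asymmetric condition dropout described above (a hypothetical helper for illustration, not the authors' training code; `None` stands in for a null condition as in classifier-free guidance):

```python
import random

def drop_conditions(bboxes, keypoints, bbox_p=0.5, kpt_p=0.15):
    """Independently drop each conditioning signal during training.
    The higher drop rate on boxes keeps the model from leaning on the
    easy-to-learn coarse localization and ignoring the keypoints."""
    if random.random() < bbox_p:
        bboxes = None
    if random.random() < kpt_p:
        keypoints = None
    return bboxes, keypoints

# With bbox_p=1.0 the boxes are always dropped, keypoints always kept
print(drop_conditions([0, 0, 32, 32], [(0.5, 0.5)], bbox_p=1.0, kpt_p=0.0))
# → (None, [(0.5, 0.5)])
```

Tuning the two probabilities independently is what the ablation table above varies: 0% box dropout lets the model overfit to boxes, while 100% discards their spatial cue entirely, and ~50% sits in between.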
Claims And Evidence: The paper motivates the absence of a joint human-animal keypoint dataset for training a unified model and introduces the HAIG-2.9M dataset. However, the need for a joint dataset is not justified. An ablation study on performance over a subset of test images that contain only multiple instances per image in HAIG-2.9M is not shown.
Methods And Evaluation Criteria: Table 4 shows the results of training on different datasets and evaluating on the HAIG-2.9M test set. However, this is not a fair comparison, since it essentially compares cross-dataset generalization to in-dataset generalization. Evaluation on the COCO+APT test sets should also be shown.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: Missing comparisons on related prior works such as
[1] Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, Yi Yang. MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis, CVPR 2024
[2] Xudong Wang, Trevor Darrell, Sai Saketh Rambhatla, Rohit Girdhar, Ishan Misra. InstanceDiffusion: Instance-level Control for Image Generation, CVPR, 2024
How does the proposed method compare qualitatively and quantitatively against [1,2]?
Supplementary Material: Yes fully
Relation To Broader Scientific Literature: The proposed method is not entirely novel, as Diffusion Transformers have been shown to be useful for coordinate regression in prior works [1-4]
Essential References Not Discussed: [1] Naoto Inoue, Kotaro Kikuchi, Edgar Simo-Serra, Mayu Otani, Kota Yamaguchi LayoutDM: Discrete Diffusion Model for Controllable Layout Generation, CVPR 2023.
[2] Yilin Wang, Zeyuan Chen, Liangjun Zhong, Zheng Ding, Zhuowen Tu, Dolfin: Diffusion Layout Transformers without Autoencoder, ECCV 2024
[3] Runyang Feng, Yixing Gao, Tze Ho Elden Tse, Xueqing Ma, Hyung Jin Chang DiffPose: SpatioTemporal Diffusion Model for Video-Based Human Pose Estimation, ICCV 2023
[4] Shoufa Chen, Peize Sun, Yibing Song, and Ping Luo. Diffusiondet: Diffusion model for object detection, ICCV 2023.
Other Strengths And Weaknesses: The paper uses DiffPose for annotating the HAIG-2.9M dataset and also evaluates with the same model. This may introduce a bias in the evaluation.
Other Comments Or Suggestions: The major concern is the missing comparisons with closely related prior work [1,2].
Questions For Authors: What is the number of images used for FID comparison?
What is the performance of HAIG with and without multiple instances in the dataset? It will be better to show the performance improvement coming from multi-instance images.
Why not annotate with an ensemble of models instead of the best model (Sec 4.2, L275)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Dear Reviewer Rf97,**
Thank you for your review and constructive comments. During the rebuttal period, we have made every effort to address your concerns. The detailed responses are below:
> Q1: The need for a joint human-animal keypoint-annotated dataset.
>
As shown in **Table 4** and **Figure 6** of the main paper, training on separate human and animal keypoint datasets makes it difficult to generate images with multiple classes, limiting the development of more general keypoint-conditioned generation.
> Q2: Results on multi-instance subset.
>
We present results on the **multi-instance subset** below. Our method still performs well, while baseline models decline, demonstrating that **UNIMC** performs better in multi-class scenarios.
| Methods | FID ⬇️ | KID ×1k ⬇️ | CLIP **⬆️** | Human Class **⬆️** | Animal Class **⬆️** | Human AP **⬆️** | Animal AP **⬆️** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **ControlNet** | 27.03 | 9.15 | 31.96 | 90.65 | 31.51 | 26.47 | 9.86 |
| **GLIGEN** | 32.51 | 12.05 | 31.04 | 80.19 | 6.31 | 25.44 | 1.58 |
| **UNIMC** | **23.51** | **7.59** | **32.20** | **93.56** | **91.50** | **29.39** | **28.67** |
> Q3: Results on COCO+APT testing datasets.
>
We have added results on the **COCO+APT testing set**, and similar to HAIG, our method demonstrates strong performance and surpasses all baseline methods.
| Methods | FID ⬇️ | KID ×1k ⬇️ | CLIP **⬆️** | Human Class **⬆️** | Animal Class **⬆️** | Human AP **⬆️** | Animal AP **⬆️** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **ControlNet** | 25.00 | 9.12 | 30.02 | 90.01 | 30.09 | 26.92 | 9.50 |
| **GLIGEN** | 32.41 | 12.09 | 29.32 | 80.10 | 5.09 | 24.49 | 0.90 |
| **UNIMC** | **24.09** | **8.01** | **30.06** | **93.42** | **92.01** | **29.65** | **28.30** |
> Q4: Performance with and without multiple instances in the dataset.
>
The training performance of HAIG with and without multiple instances shows that adding multi-instance images improves all metrics.
| Variants | FID ⬇️ | KID ×1k ⬇️ | CLIP **⬆️** | Human Class **⬆️** | Animal Class **⬆️** | Human AP **⬆️** | Animal AP **⬆️** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **W/O multiple instances** | 25.00 | 8.06 | 31.00 | 91.04 | 89.09 | 24.44 | 22.65 |
| **W multiple instances** | **23.63** | **7.57** | **32.28** | **93.55** | **91.71** | **30.01** | **28.38** |
> Q5: Comparison with **[1]MIGC** and **[2]InstanceDiffusion**.
>
The table below presents the **quantitative comparison** with **MIGC** and **InstanceDiffusion**. **MIGC** controls via bounding boxes and classes, while **InstanceDiffusion** can control through bounding boxes, points, and classes. Compared to previous baseline methods, both of these models show significant improvements in class control accuracy. InstanceDiffusion, which adds point control, also improves keypoint control accuracy.
| Methods | FID ⬇️ | KID ×1k ⬇️ | CLIP **⬆️** | Human Class **⬆️** | Animal Class **⬆️** | Human AP **⬆️** | Animal AP **⬆️** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **MIGC** | 23.78 | 8.06 | 31.29 | 92.06 | 88.04 | 17.04 | 15.99 |
| **InstanceDiffusion** | 24.06 | 9.17 | 30.99 | 89.70 | 89.00 | 21.06 | 20.04 |
| **UNIMC** | **23.63** | **7.57** | **32.28** | **93.55** | **91.71** | **30.01** | **28.38** |
The **qualitative comparison** can be seen in [Fig.5](https://ibb.co/mFCJztLW).
> Q6: What is the number of images used for FID comparison?
>
As mentioned in the **Implementation Details** section, we used the **HAIG-2.9M testing set** for evaluation, with **1,224** images.
> Q7: Why not annotate with the ensemble of models?
>
Relevant works, such as Pixart-sigma, have shown that using a better annotation model improves the model's performance. Additionally, animal pose estimation is not as mature as human pose estimation, and only selecting the best model ensures that we can achieve usable results.
> Q8: Bias in Evaluation.
>
Thank you for your insightful question. Please refer to **Q3** — our method also achieves strong performance on human-annotated datasets, which supports the **validity of our evaluation results** and the **reliability of our evaluation methodology**.
> Q9: Difference from coordinate regression tasks.
>
Our task is to extend **keypoint-conditioned image generation** from a single category (human) to a **multi-class setting**. To achieve this, we use keypoint **coordinates as conditional inputs** to guide image generation. In contrast, prior works on **coordinate regression** use coordinates as **outputs**, not as conditioning inputs. While these works demonstrate that **DiT** can model coordinate distributions, they do not show that DiT can **generate images conditioned on coordinates**, which is the core challenge of our work. Therefore, the prior use of DiT for coordinate regression is only loosely related to our task and does not diminish the novelty of our proposed approach.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal. It addressed most of my concerns. I am increasing my score to Weak Accept. I would recommend the authors to include the additional results and comparisons in the revised version.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Rf97,
Thank you sincerely for taking the time to review our rebuttal and for thoughtfully considering our clarifications. We are especially grateful that you found our additional experiments and comparisons helpful, and we truly appreciate your updated score.
Your comments throughout the review process—particularly the suggestions regarding additional comparisons and ablation studies—have been instrumental in helping us strengthen the paper. We will make sure to include the extended results and analyses in the revised version as you recommended.
If there is anything further you would like to discuss or clarify, we would be more than happy to engage further.
Sincerely,
The Authors | Summary: This paper proposes a framework (UNIMC) that generates images containing multiple objects (including humans and various other entities) by leveraging joint keypoints and introduces a large-scale dataset (HAIG-2.9M) to support this approach. Unlike conventional keypoint-based image control methods, UNIMC utilizes a unified keypoint encoder that encodes keypoints of various classes and instances into a shared representation space, enabling more precise multi-class and multi-instance image generation. By leveraging a large-scale dataset, the proposed method enhances the performance of keypoint-based image generation models, particularly in multi-object and occlusion-heavy scenarios, while maintaining high image quality.
Claims And Evidence: • The paper claims that the proposed method outperforms existing keypoint-conditioned image generation models such as GLIGEN and ControlNet, which generate images containing multiple objects. This claim is supported by quantitative evaluations in the experimental results, where various metrics demonstrate the superiority of the proposed approach.
• In occlusion scenarios, qualitative evaluation results show that objects in the generated images maintain accurate poses by leveraging per-object keypoints.
• The proposed dataset can be effectively utilized for keypoint-based image generation and contains over ten times more images and objects compared to major existing datasets such as COCO and Human-Art, while also maintaining high quality. This is detailed in Table 1 of the paper.
Methods And Evaluation Criteria: The proposed method introduces a unified keypoint encoder trained to generate high-quality images that accurately distinguish various classes and multiple objects. Previous approaches struggled with keypoint-based control due to inconsistencies in keypoint formats across classes and challenges in differentiating objects within the same class. This method effectively addresses these issues, marking a significant improvement.
Unlike existing models that rely solely on joint keypoints for generating multi-class and multi-object images, the proposed approach takes a different direction. By integrating both keypoint and class information during training, it offers a more structured and intuitive solution. This innovation appears both practical and effective. However, a limitation of the study is that it does not verify whether the method can be applied to other models, leaving room for further validation.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The paper effectively employs performance metrics commonly used in generative models and conducts both quantitative and qualitative evaluations. By comparing various datasets and models, the experiments are well-structured and appear valid. The results demonstrate that UNIMC maintains accurate poses using per-object keypoints, even in occlusion scenarios, highlighting its robustness.
However, the study lacks an analysis of training and inference speed, which are crucial factors for practical application. Providing additional details on these aspects would enhance the clarity of the research. Furthermore, examining whether this method can be applied to other diffusion models would help establish it as a more generalized approach.
Supplementary Material: The supplementary materials effectively highlight the significance of the dataset through experimental analysis. The results demonstrate that a larger dataset and precise annotations are crucial for enhancing model performance. Additionally, a user study was conducted to assess the model’s quality compared to existing approaches, successfully showcasing its improvement in performance.
Relation To Broader Scientific Literature: The paper presents a detailed comparison with ControlNet and GLIGEN, effectively illustrating the limitations of existing methods. It directly evaluates the proposed model against PIXART-α, its base model, as well as other keypoint-conditioned image generation models like ControlNet and GLIGEN. However, a broader comparison with various text-based image generation models would have further strengthened the study.
Essential References Not Discussed: While the paper compares various models and their generative performance, it does not provide a thorough comparison with the latest Visual-Language Model (VLM)-based image generation approaches. Incorporating both qualitative and quantitative evaluations against state-of-the-art models in recent research trends would have broadened the study’s scope and enhanced its credibility.
Other Strengths And Weaknesses: Strengths
This study introduces a new dataset designed for multi-class and multi-object image generation based on joint keypoints, addressing the traditional limitation where such datasets were primarily designed for recognition tasks. To ensure accurate annotations, even in images with overlapping objects, bounding boxes, keypoints, and captions were incorporated, ultimately enhancing the quality of image generation.
Weaknesses
While the paper demonstrates the superiority of the proposed method through comparative experiments with GLIGEN and ControlNet, it lacks a detailed analysis explaining why it outperforms existing approaches. Additionally, a comparison with Fusion Encoder-based Text-to-Image models, which align with recent research trends, would have further strengthened the study’s impact. Furthermore, while the HAIG-2.9M dataset and the UNIMC model are repeatedly emphasized as key contributions, additional novel elements would help further differentiate this work from prior research.
Other Comments Or Suggestions: • The contributions of the paper are somewhat repetitive, and it would be beneficial to highlight additional novel contributions.
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Dear Reviewer 2vGV,**
Thank you for your review and constructive comments. During the rebuttal period, we have made every effort to address your concerns. The detailed responses are provided below:
> Q1: Applicability to other models.
Our method is designed to be compatible with **almost all DiT-based architectures**, due to the following reasons:
1. The **Unified Keypoint Encoder** is model-agnostic and does not rely on any specific architectural design.
2. The **Timestep-aware Keypoint Modulator** is implemented using only self-attention and does not depend on any model-specific components.
To demonstrate the general applicability of our method, we conducted experiments on [Hunyuan-DiT](https://github.com/Tencent/HunyuanDiT). The specific experimental results are shown in the table below. Our method successfully equips **Hunyuan-DiT** with the ability to perform **keypoint-conditioned multi-class image generation**.
| **Methods** | **Human Class ⬆️** | **Animal Class ⬆️** | **Human AP ⬆️** | **Animal AP ⬆️** |
| --------------------- | ----------------- | ------------------ | -------------- | --------------- |
| Hunyuan-DiT | 25.49 | 16.77 | 0.26 | 0.14 |
| **Hunyuan-DiT+UNIMC** | **94.06** | **91.89** | **30.92** | **28.75** |
> Q2: Analysis of training and inference speed.
Please refer to **R1-Q1** and **R1-Q4**. Our method is highly efficient in both training and inference.
Specifically, it requires only **576 A800 GPU-hours** to complete training. At inference time, it is **more efficient than GLIGEN** and **comparable to ControlNet**. Moreover, our method can quickly achieve strong performance even when fine-tuned on small-scale datasets.
> Q3: Comparison with various text-based and Visual-Language Model (VLM)-based image generation models.
Our method is orthogonal to general text-to-image (T2I) or Visual-Language Model (VLM)-based image generation approaches. These models lack the ability to generate images conditioned on **keypoints from different object categories**. In contrast, we introduce a framework that enables **keypoint-conditioned multi-class image generation**.
The table below provides a comparison between **UNIMC** and recent state-of-the-art T2I models. Notably, our method achieves **substantial improvements in Human/Animal class accuracy and keypoint-level AP**, demonstrating its superiority in controllability and structural alignment.
| **Methods** | **Human Class ⬆️** | **Animal Class ⬆️** | **Human AP ⬆️** | **Animal AP ⬆️** |
| ----------- | ----------------- | ------------------ | -------------- | --------------- |
| Flux.1-dev | 25.47 | 20.01 | 0.27 | 0.13 |
| Kolors | 24.71 | 18.90 | 0.20 | 0.15 |
| **UNIMC** | **93.55** | **91.71** | **30.01** | **28.38** |
[Fig.4](https://ibb.co/Tq4WQWqv) presents a qualitative comparison of **UNIMC** with **Flux** and **Kolors**. As shown, the latter two models, relying solely on the prompt, fail to generate images that align with the provided keypoints.
> Q4: Why our method outperforms prior work in keypoint-conditioned multi-class image generation?
The core objective of our method is to enable keypoint-conditioned multi-class image generation.
To this end, we introduce a **Unified Keypoint Encoder** that explicitly models the relationship between object categories and their corresponding keypoints, and a **Timestep-aware Keypoint Modulator** that enables fine-grained, keypoint-level feature modulation.
In contrast to prior methods, our approach effectively addresses the challenges of **class binding fusion** and **keypoint binding fusion**, allowing the model to learn precise keypoint control across multiple classes simultaneously.
As a result, **UNIMC achieves significant improvements over baseline methods** on this task.
> Q5: Additional contributions beyond HAIG-2.9M and UNIMC.
Our primary contribution is extending **keypoint-conditioned image generation** from a single class (human) to a **general multi-class setting**. To achieve this goal, we propose both the **HAIG-2.9M dataset** and the **UNIMC model**, which together enable **keypoint-conditioned multi-class image generation** across diverse object categories. | Summary: The paper introduces UNIMC, a unified Diffusion Transformer framework for keypoint-guided multi-class image generation, and HAIG-2.9M, a large-scale dataset with 786K images and 2.9M instance annotations covering humans and 30 animal classes. UNIMC addresses limitations in existing approaches by using explicit class names, bounding boxes, and keypoint coordinates instead of skeleton images, which solves "class binding confusion" and "instance binding confusion" problems. The framework employs a unified keypoint encoder that maps different species' keypoints into a shared representation space and a timestep-aware keypoint modulator that injects keypoint tokens into the DiT backbone for fine control. Experiments demonstrate UNIMC outperforms previous methods like ControlNet and GLIGEN in image quality metrics, class accuracy, pose accuracy, and handling complex scenarios with multiple overlapping humans and animals.
Claims And Evidence: The primary claims and their supporting evidence include:
1. **UNIMC's effectiveness for keypoint-guided generation**: The authors provide extensive quantitative metrics (FID, KID, CLIP scores, class accuracy, pose accuracy) in Tables 3 and 4 showing UNIMC outperforms baseline methods. This is reinforced by qualitative examples in Figures 6-8 that visually demonstrate the model's capabilities.
2. **Superiority of HAIG-2.9M dataset**: Table 1 provides a clear comparison with existing datasets, showing HAIG-2.9M's advantages in scale, diversity, and annotation quality. The benefits of training on this dataset are evidenced in Table 4, showing significant performance improvements compared to training on COCO, APT36K, or their combination.
3. **Effectiveness of the unified keypoint encoder and timestep-aware keypoint modulator**: The ablation studies in Table 7 methodically compare different configurations, showing Config a (the proposed approach) consistently outperforms alternatives across all metrics.
4. **Solution to class binding and instance binding confusion**: Qualitative examples in Figures 6-8 demonstrate the model's ability to correctly generate the appropriate class and manage multiple instances, especially in rows with overlapping subjects.
5. **Human preference**: Tables 8 and 9 provide human evaluation results that align with the quantitative metrics, strengthening the claim that UNIMC produces better results.
The only claim that could benefit from stronger evidence is the model's efficiency, which is mentioned but not extensively benchmarked in terms of computational requirements or training/inference times compared to alternatives.
Methods And Evaluation Criteria: The methods and evaluation criteria in this paper are well-aligned with the problem of keypoint-guided multi-class image generation.
**Methods:**
- Using explicit class names, bounding boxes, and keypoint coordinates instead of skeleton images directly addresses the identified limitations of previous approaches
- The DiT-based architecture is appropriate for high-quality image generation
- The unified keypoint encoder sensibly maps different anatomies into a shared representation space
- The HAIG-2.9M dataset is particularly well-suited as it directly addresses the identified gap in existing datasets by providing joint annotations for both humans and animals across diverse scenes.
**Evaluation Criteria:**
- Image quality metrics (FID, KID) are standard and appropriate
- Text-image alignment (CLIP) sensibly evaluates prompt faithfulness
- Class accuracy using YOLO-World effectively assesses whether correct subjects were generated
- Pose accuracy (AP) appropriately measures keypoint control fidelity
- Human preference studies validly capture subjective quality assessments
- Comparison against relevant baselines (ControlNet, GLIGEN) provides meaningful context
- Ablation studies effectively isolate component contributions
Theoretical Claims: The paper doesn't contain any formal mathematical proofs or theoretical claims requiring verification.
Experimental Designs Or Analyses: The experimental designs and analyses appear sound and appropriate. The paper uses relevant baselines (PIXART-α, ControlNet, GLIGEN), comprehensive metrics (image quality, alignment, class/pose accuracy), systematic ablation studies, cross-dataset comparisons, and human evaluations. The experiments effectively isolate component contributions and demonstrate UNIMC's advantages.
Supplementary Material: I reviewed their appendix and provided code.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ## Other Weaknesses:
1. **Computational Requirements**: The paper lacks discussion of computational costs and efficiency. Information about training time, inference speed, and resource requirements would help assess practical applicability.
2. **Limited Diversity of Animal Species**: While 30 animal classes is a good start, it still represents a limited subset of the animal kingdom. Discussion of how the approach might generalize to unseen species would strengthen the paper.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Could you provide details about the computational requirements of UNIMC compared to baseline methods? Specifically, what are the training time, memory requirements, and inference speed differences between UNIMC and alternatives like ControlNet and GLIGEN?
2. How well does the model generalize to unseen animal species not present in the HAIG-2.9M dataset? For instance, if provided with keypoints for a species like a kangaroo or platypus that wasn't in the training data, how would UNIMC perform?
3. Did you observe any systematic failure cases or limitations of UNIMC? Understanding typical failure modes would help assess the robustness of the approach.
4. The paper mentions using 8 A800 GPUs for training. What would be the minimum computational setup required to fine-tune UNIMC on a smaller dataset for a specific application?
5. How did you ensure the quality of keypoint annotations in HAIG-2.9M, particularly for animal species where keypoint definitions might be less standardized than for humans? What was the process for handling edge cases or ambiguous keypoint placements?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Dear Reviewer kiWJ,**
Thank you for your review and constructive comments. During the rebuttal period, we have made every effort to address your concerns. The detailed responses are provided below:
> Q1: Details about the computational requirements of UNIMC compared to baseline methods.
>
Each method is trained using different datasets and GPU configurations, and their reported training costs are measured in different formats. For example, **ControlNet** reports GPU-hours, while **GLIGEN** uses steps * batch size. Therefore, directly comparing training costs across methods is not meaningful. In the table below, we report the **officially reported training costs of baseline methods**, along with our method’s training cost in both formats. Despite achieving significantly better performance than the baselines, our method maintains a relatively low training cost. For inference, all methods are evaluated on a **single RTX 3080 Ti** with **FP16 precision**, using a **50-step DDIM sampler** and **Flash-Attention-v2**. The generation resolution is **512×512**.
Our method is more efficient than ControlNet, and although the inference time and memory usage are slightly higher than those of GLIGEN, the performance improvements over GLIGEN are substantial.
| **Method** | **Training cost** | **Inference time (s/image)** | **Inference memory (GB)** |
| --- | --- | --- | --- |
| ControlNet | 300 GPU-hours with Nvidia A100 80G | 4.51 | 5.79 |
| GLIGEN | Steps = 500K, Batch size = 32 | 4.01 | 4.90 |
| UNIMC (Ours) | Steps = 8K, Batch size = 256 ≈ **576 GPU-hours** with Nvidia A800 80G | 4.23 | 5.08 |
> Q2: How well does the model generalize to unseen animal species not present in the HAIG-2.9M dataset?
>
Although our model is trained on only **31 classes**, it generalizes well to unseen categories due to two key factors:
1. the **structural similarity across different species**, and
2. our use of a **text encoder** for class representation, which allows for extension to arbitrary categories.
In addition, the underlying **pretrained T2I model** is inherently capable of class-to-image generation for a wide range of categories. Together, these components enable our method to theoretically scale to any class.
Taking **kangaroo** as an example, we observe that its keypoint structure is similar to that of humans. Thus, we reused human keypoints to generate a kangaroo. As shown in [Fig.1](https://ibb.co/9mfwbtc8), the generated pose and spatial structure roughly align with the keypoints, suggesting that our model possesses a certain degree of generalization to **unseen species**.
Therefore, we believe that as long as the training set covers a sufficient diversity of **structurally representative animal classes**, our model can generalize to a broad range of animal species.
> Q3: Failure cases or limitations of UNIMC.
>
We observed that for some categories, a few keypoints are not perfectly controllable. For example, in the case of **cat** shown in [Fig.2](https://ibb.co/qLYjrjxn), the tail fails to be accurately generated. Beyond such rare cases, we did not observe significant failure cases in the test set.
> Q4: What would be the minimum computational setup required to fine-tune UNIMC on a smaller dataset for a specific application?
>
Our use of 8 A800 GPUs was intended to accelerate training on a large-scale dataset. However, the UNIMC model itself is lightweight, with only **0.93B parameters** (and **0.35B trainable**). Under FP16 precision and batch size = 1, it requires only **12GB of GPU memory**.
We fine-tuned UNIMC on a dataset of **1,000 cat images** using a **single RTX 3090**, with **batch size = 4** and **2K training steps**, taking about **4 hours**. The resulting **AP for the cat category reached 28.05**, and qualitative results are shown in [Fig.3](https://ibb.co/d4659mqZ).
> Q5: How did you ensure the quality of keypoint annotations in HAIG-2.9M?
>
As described in Section 4.2, we first randomly sampled **5K images** and annotated them using several expert models. We then conducted a **user study** to select the best-performing model. As a result, our keypoint annotations reach the upper bound achievable by current pretrained models.
Moreover, the animal species in our dataset are already included in the training data of the selected **pretrained keypoint estimator**. During annotation, we manually **sample and discard images with low-quality annotations**, and we also filter out images where keypoint predictions have **low confidence**.
To further reduce **ambiguous keypoint placements**, we feed the **bounding box detection results** into the keypoint estimator and **restrict keypoint detection to the corresponding bounding boxes**. This ensures that the estimator focuses only on the relevant regions, thereby improving annotation accuracy and reliability.
Exploring Invariance in Images through One-way Wave Equations | Accept (poster) | Summary: This paper draws connections between recurring regression using first-order norm + linear autoregression and the discretized one-way wave equation. Through this framework the paper proposes a model to embed images and provides empirical evidence supporting the model performance for image reconstruction. Moreover, the paper provides supporting evidence that this model is more memory efficient than existing methods and provides improved speed under parallelization.
## update after rebuttal
In my opinion, the rebuttal has done a great job in the following aspects:
- The acknowledgement of overstatements and willingness to revise these statements
- Providing additional analysis on the diagonalizability and invertibility of the matrices key to their approach.
Given the good PSNR and poor FID scores along with a lack of theoretical connections, it remains unclear if traveling waves reveal a promising new avenue for autoregression on images. However, the paper makes a good case of empirical evidence supporting this approach.
Claims And Evidence: The central claim of the paper is the following: *Images share a set of one-way wave equations in the latent feature space*. This is evidenced through empirical comparison of peak signal-to-noise ratio (PSNR) on the ImageNet and Kodak datasets. However, **this strong statement lacks strong theoretical support.** Moreover, simple image regression models achieve high PSNR by blurring the image, which appears to be the evidence illustrated in Figure 2 as well as in the reconstructions in the appendix. Without additional standard metrics (FID/IS/SSIM) for comparison, these empirical results are also insufficient to support the paper's central claim.
In addition, the paper makes claims that this model is computationally more efficient in both speed and memory than existing methods. This is evidenced by comparing the dimension of the latent state, number of latent parameters and the bits per pixel across several datasets. This paper makes a claim that the model is capable of providing computational speed ups using parallelization. This is evidenced by inference time comparisons on a single image.
Methods And Evaluation Criteria: The proposed evaluation criteria and methods make sense and are necessary for evaluating the current model. However, they are insufficient in supporting the papers claim.
Theoretical Claims: I have checked the derivation of the one-way wave equation from the FINOLA architecture. This derivation relies on several key assumptions, such as the invertibility and diagonalizability of general weight matrices. As a note, these derivations lack some small but important details, such as the commutativity of $V^{-1}$ with $\nabla_x$ and $\nabla_y$. The larger issue is that this derivation demonstrates a property of the network, that it learns a fixed wave speed, rather than a property of images. This would be significantly improved if the claim were relaxed or proper theoretical propositions were provided about the dynamics of the underlying latent state.
Experimental Designs Or Analyses: I did not verify the experimental analysis as I did not find any supplementary material provided.
Supplementary Material: I did not find any supplementary material provided.
Relation To Broader Scientific Literature: This work demonstrating a network which admits a latent traveling wave solution mirrors some related works which demonstrate the capacity to generate traveling wave solutions in the latent state of neural networks [1]. This paper offers some novelty by using the FINOLA method and discretizing solely about the spatial differences. However, the central claim that all images should admit a common spatial wave-equation with spatially invariant weights appears at odds with recent techniques which perform image segmentation with traveling waves and demonstrate that unique spatiotemporal patterns emerge for each object in an input image [2].
[1] Keller, T. Anderson, et al. _TRAVELING WAVES ENCODE THE RECENT PAST AND ENHANCE SEQUENCE LEARNING_. 2024.
[2] Liboni, Luisa H. B., et al. _Image Segmentation with Traveling Waves in an Exactly Solvable Recurrent Neural Network_. arXiv:2311.16943, arXiv, 28 Nov. 2023. _arXiv.org_, [https://doi.org/10.48550/arXiv.2311.16943](https://doi.org/10.48550/arXiv.2311.16943).
Essential References Not Discussed: The paper thoroughly explores the use and impact of autoregressive models, but under explores of the importance or impact of traveling waves in neural networks and image processing. See above for some example references.
Other Strengths And Weaknesses: **Strengths:** To the best of my knowledge the derivation of the one-way wave equation from FINOLA is a novel derivation. In addition there are a substantial number of tasks and results presented.
**Weaknesses:** Strong central claim without appropriate theoretical support. Insufficient metrics to adequately determine the full performance of the architecture.
Other Comments Or Suggestions: Some typos like line one of the abstract: *we empirically reveals an invariance over images*.
Questions For Authors: Can you provide more detailed theoretical insights or formal proofs that justify the assumption that images naturally conform to a one-way wave equation in the latent feature space?
Beyond PSNR, have you evaluated the model using perceptually motivated metrics (e.g., SSIM, FID, or IS)?
Is it possible to report the wave speed itself? Have you considered a framework where the wave speed varies spatially or by image rather than being invariant?
Given that the derivation relies on assumptions such as the invertibility and diagonalizability of weight matrices, how sensitive is the model’s performance to these assumptions? Specifically, have you examined the condition numbers of matrices B or Q in practice, and what impact do they have on the stability and performance of the model?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's valuable feedback.
$\color{blue}{[Question-1]:}$
**Can you provide more detailed theoretical insights or formal proofs that justify the assumption that images naturally conform to a one-way wave equation in the latent feature space?**
We acknowledge the challenge of providing a rigorous theoretical proof for this assumption. Therefore, our initial approach has been to establish strong empirical support. The successful reconstruction achieved by FINOLA, relying solely on local structure (first-order and linear after normalization), serves as compelling empirical evidence.
Furthermore, we have empirically validated key mathematical properties associated with the model. Specifically, across multiple training runs, we have confirmed:
* The invertibility of matrices $\bf{A}$ and $\bf{B}$.
* The diagonalizability of $\mathbf{AB}^{-1}$.
We believe that these empirical observations warrant further investigation and encourage the community to pursue theoretical proofs that can provide a deeper understanding of the underlying principles.
---
$\color{blue}{[Question-2]:}$
**Beyond PSNR, have you evaluated the model using perceptually motivated metrics (e.g., SSIM, FID, or IS)?**
This question is also addressed in our response to Reviewer tmjQ's Question 2. Please refer to that response for a detailed discussion of FID results.
To summarize, while FINOLA demonstrates strong performance in terms of PSNR, its results are less competitive with respect to FID. We attribute this discrepancy to our intentional use of the $L_2$ loss function, which was prioritized for its simplicity to emphasize that FINOLA's effectiveness does not rely on complex reconstruction loss formulations. We acknowledge that incorporating perceptual loss functions and Generative Adversarial Network (GAN) losses, as employed by models such as VQGAN, ViT_VQGAN, and Stable Diffusion, could potentially enhance FID scores.
---
$\color{blue}{[Question-3]:}$
**Is it possible to report the wave speed itself? Have you considered a framework where the wave speed varies spatially or by image rather than being invariant?**
The magnitude of the wave speed (represented by the eigenvalues of $\mathbf{AB}^{-1}$) for dimension 512 is provided at
https://tinyurl.com/2r4s8xs3. This speed value ranges from 0.5 to 1.4.
We appreciate the reviewer's suggestion. While we acknowledge that reconstruction quality could potentially be enhanced by allowing the wave speed to vary spatially or across different images, this is beyond the scope of the present study. Our current investigation focuses on the scenario where wave speed remains invariant across spatial positions and images, with image variations limited to the initial condition (or compressed representation $\bf{q}$). Exploring the incorporation of spatially or image-dependent wave speeds represents a promising avenue for future research.
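As a concrete sketch of how the reported speeds are obtained: the wave-speed magnitudes are the eigenvalue magnitudes of $\mathbf{AB}^{-1}$. The matrices below are hypothetical stand-ins for the learned weights, which are only linked above, not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the learned FINOLA matrices A and B.
d = 16
A = rng.standard_normal((d, d)) / np.sqrt(d)
B = np.eye(d) + 0.1 * rng.standard_normal((d, d))

# Wave speeds = magnitudes of the eigenvalues of A B^{-1}.
speeds = np.abs(np.linalg.eigvals(A @ np.linalg.inv(B)))
print(speeds.min(), speeds.max())
```

For the trained dimension-512 model, these magnitudes fall between 0.5 and 1.4, as noted above.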
---
$\color{blue}{[Question-4]:}$
**Given that the derivation relies on assumptions such as the invertibility and diagonalizability of weight matrices, how sensitive is the model’s performance to these assumptions? Specifically, have you examined the condition numbers of matrices B or Q in practice, and what impact do they have on the stability and performance of the model?**
While the derivation relies on the assumptions of matrix invertibility and diagonalizability, these are not explicitly enforced during model training but are validated through post-training analysis. The derivations offer a mathematical interpretation of FINOLA but do not directly affect model performance.
**Empirical Validation:** We have empirically validated the invertibility of matrices $\bf{A}$ and $\bf{B}$ across multiple training runs. Furthermore, we have confirmed the diagonalizability of $\mathbf{AB}^{-1}$.
**Condition Number:** The eigenvalue spectra of matrices $\bf{A}$ and $\bf{B}$, related to their condition numbers (92 and 151, respectively), are available at https://tinyurl.com/3xz2e5am.
**Sensitivity Analysis:** Despite the large condition numbers, FINOLA is stable under perturbations in the compressed representation space ($\bf{q}$). To evaluate perturbation sensitivity, we performed a controlled experiment where the compressed representation ($\bf{q}$) was perturbed. Perturbed representations ($\bf{q}$$_p$) were generated by linearly interpolating between the image representation ($\bf{q}$$_i$) and a random noise vector ($\bf{q}$$_n$):
$\mathbf{q}_p=(1-p)\mathbf{q}_i+p\mathbf{q}_n$
The perturbation level, $p$, ranged from 0 (no perturbation) to 1.0 (full perturbation) in steps of 0.1. FINOLA and the decoder then processed these perturbed representations to reconstruct images.
The result images (see https://tinyurl.com/47whcsve) show that FINOLA is robust to small perturbations ($p$=0.1 or 0.2), but reconstruction quality diminishes with increasing perturbation.
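The interpolation sweep described above can be sketched as follows. The FINOLA/decoder call is omitted; the representation dimension (3072, taken from the comparison table elsewhere in this discussion) and the helper name are illustrative assumptions:

```python
import numpy as np

def perturb_sweep(q_i, rng, steps=11):
    """Linearly interpolate a representation toward random noise: q_p = (1-p)*q_i + p*q_n."""
    q_n = rng.standard_normal(q_i.shape)  # random noise vector
    return [(p, (1 - p) * q_i + p * q_n) for p in np.linspace(0.0, 1.0, steps)]

rng = np.random.default_rng(0)
q_i = rng.standard_normal(3072)           # stand-in for an image's compressed representation
sweep = perturb_sweep(q_i, rng)           # p = 0.0, 0.1, ..., 1.0

assert np.allclose(sweep[0][1], q_i)      # p = 0: unperturbed representation
# each (p, q_p) pair would then be passed through FINOLA + decoder to reconstruct an image
```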
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses. I acknowledge the empirical evidence demonstrating that the model learns a traveling wave solution common among images: upon deep inspection, $A$ and $B$ turn out to be invertible and $AB^{-1}$ diagonalizable.
However, I have remaining concerns regarding the strong claims introduced in the paper and the provided empirical evidence. In particular, the authors claim that their method reveals inherent properties of images: that they share a particular one-way wave equation. However, this appears to be more indicative of the model selection than a property of the images. Moreover, the high PSNR and poor FID scores are more indicative of blurry images that may be a result of numerical diffusion in the discretization of the one-way wave equation.
I would be inclined to raise my score if the language regarding the inherent property of images were relaxed.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful feedback regarding our claim about inherent image properties. We agree with your assessment: relaxing the claim that images inherently share a specific one-way wave equation is more accurate and better reflects the empirical nature of our findings.
Therefore, we propose the following modifications to relax the claim:
**Empirical Finding**: We demonstrate empirically that the FINOLA model successfully learns a highly compressed image representation ($\bf{q}$), from which images can be effectively reconstructed using a remarkably simple, spatially consistent autoregressive process (first-order and linear after normalization). This is interesting, as the compressed representation captures essential image information and relies only on simple, local relationships for successful reconstruction.
**Model Interpretation**: Our empirical analysis shows the learned FINOLA matrices $\bf{A}$ and $\bf{B}$ are typically invertible and $\mathbf{AB}^{-1}$ is diagonalizable, after multiple training runs. We will clarify that this allows for an interpretation where, after diagonalization, the model's dynamics resemble a set of one-way wave equations, with $\bf{q}$ as the initial condition. This is presented as an interesting perspective on the learned solution, not a proven inherent property of images themselves.
**Acknowledge Theoretical Gap**: We acknowledge this wave equation perspective currently lacks rigorous theoretical proof, despite the model's strong empirical performance. We hope our work encourages further theoretical investigation.
We commit to thoroughly revise the language throughout the manuscript to consistently reflect this more nuanced framing, focusing on the empirical capabilities of the FINOLA model and the interpretive nature of the wave equation connection. We believe these changes directly address your main concern. | Summary: The authors introduce a new image encoder-decoder architecture called First Order Norm + Linear Autoregression (FINOLA). They compare it with several other image representations (including the Discrete Cosine Transform, the Discrete Wavelet Transform, and various generative models) and report that FINOLA achieves favorable results. They also test their encoder on a range of additional tasks, such as image inpainting, outpainting, and self-supervised learning on the ImageNet-1K dataset. However, the reviewer does not fully understand a few things: (1) image invariances; (2) Line 260 contradicts the proposed method; (3) the effect of perturbations on the first-order nonlinear differential equation
Claims And Evidence: The reviewer observes a significant discrepancy between the theoretical claims made in the manuscript and their implementation in the accompanying code. Below are several specific points of concern:
Diagonalizability and Contradiction (Line 260 vs. Line 098):
The claim that $AB^{-1}$ is non-diagonalizable directly conflicts with the assumption that $\Gamma$ is diagonal (as stated in line 102 and Equation (5)). In Figure 15, the $\Gamma$ parameters fail to decorrelate, producing nearly identical outputs for both real and complex domains. This inconsistency undermines the theoretical foundation presented in the paper.
Impact of Small Perturbations (Equation 1):
The discussion on Equation (1) omits a key consideration: introducing a small perturbation or noise could potentially disrupt the entire generation process. Given that the model incorporates non-linearities, even minimal perturbations might significantly affect the final outputs. An analysis of the robustness of the method in the presence of noise would strengthen the manuscript.
Interpretability of Matrices
𝐴 and B:
The role and interpretability of matrices $A$ and $B$ remain unclear. The paper does not offer sufficient explanation or insight into how these matrices might be understood from either a theoretical or empirical perspective. Providing additional clarity (e.g., whether they have specific geometric, statistical, or functional interpretations) would be beneficial.
Overall, while the paper addresses an interesting problem, the points above highlight critical gaps between the stated theory and the practical implementation. Further clarification or resolution of these issues is necessary to substantiate the claims made in the submission.
Methods And Evaluation Criteria: Yes, the method showcased their result on ImageNet-1K. The quantitative metrics are correct.
Theoretical Claims: The reviewer observes significant inconsistencies between the theoretical claims and their actual implementation. Specifically:
1. Contradiction Between Lines 260 and 098:
The manuscript asserts that $AB^{-1}$ is not diagonalizable, yet subsequently assumes that $\Gamma$ is diagonal (line 102 and Equation (5)). Figure 15 further illustrates that the $\Gamma$ values fail to decorrelate, yielding nearly identical outputs for both real and complex domains. This discrepancy raises questions about the validity of the stated theoretical framework.
2. Sensitivity to Small Perturbations (Equation 1):
The paper does not sufficiently address the potential impact of noise or minor perturbations on the generation process. Given the reliance on non-linear transformations, even small perturbations could significantly alter the final results. An in-depth robustness analysis would strengthen the manuscript.
3. Role and Interpretability of Matrices $A$ and $B$:
The purpose of matrices $A$ and $B$ remains unclear. The paper offers no clear explanation of how these matrices might be interpreted, either geometrically or statistically. Elucidating their function and providing interpretability would enhance the reader’s understanding of the proposed approach.
Experimental Designs Or Analyses: Comparison with Recent Methods
The proposed approach is fundamentally an autoencoder, yet it does not provide any comparison with other recent techniques, such as [1]. In particular, [1, 2, 3] have been shown to achieve lower parameter counts and better FID scores than the proposed model, making a direct empirical comparison crucial for a fair evaluation.
Reconstruction Quality
While the method appears capable of capturing lower-frequency components, it struggles to preserve high-frequency details in the reconstructed images. This is especially problematic for applications in medical imaging or scenarios involving small object generation (e.g., crowds, satellite imagery), where fine-grained details are important.
Robustness to Adversarial Attacks
It remains unclear whether the proposed model’s performance holds up under various dataset/domain shifts. For instance, color or rotational shifts, as well as operations like cutmix, may significantly alter the model’s outputs. Demonstrating robustness across these perturbations would strengthen the paper’s contributions.
Evidence of Image Invariances
The discussion of image invariances is not supported by sufficiently clear experiments or evidence. Additional empirical results demonstrating these invariances under different transformations or conditions would help substantiate this claim.
References
[1] Idempotent Generative Network, The Twelfth International Conference on Learning Representations, 2024, https://openreview.net/forum?id=XIaS66XkNA
[2] Swapping Autoencoder for Deep Image Manipulation, NeurIPS 2020.
[3] Concept Bottleneck Generative Models, ICLR 2024.
Supplementary Material: 1. Figure 11 (image interpolation): image interpolation is not as smooth as StyleGAN-XL's.
2. There are no results for the interpretability of $A$ and $B$.
Relation To Broader Scientific Literature: The work can be viewed in line with Idempotent Generative Network (The Twelfth International Conference on Learning Representations, 2024, https://openreview.net/forum?id=XIaS66XkNA), Swapping Autoencoder for Deep Image Manipulation (NeurIPS 2020), and Concept Bottleneck Generative Models (ICLR 2024).
The method by nature is an image generator.
Essential References Not Discussed: [1] Idempotent Generative Network, The Twelfth International Conference on Learning Representations, 2024, https://openreview.net/forum?id=XIaS66XkNA
[2] Swapping Autoencoder for Deep Image Manipulation, NeurIPS 2020.
[3] Concept Bottleneck Generative Models, ICLR 2024.
Other Strengths And Weaknesses: The paper is well written
The dataset considered in this paper is good
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Contradiction Between Lines 260 and 098:
Does the assertion that $AB^{-1}$ is not diagonalizable conflict with the assumption that $\Gamma$ is diagonal (line 102 and Equation (5)), especially when Figure 15 shows $\Gamma$ failing to decorrelate and yielding nearly identical outputs for both real and complex domains?
2. Sensitivity to Small Perturbations (Equation 1):
Does the manuscript sufficiently address the impact of minor perturbations or noise on the generation process, given the reliance on non-linear transformations where even small perturbations could significantly alter the final results?
3. Role and Interpretability of Matrices $A$ and $B$:
Is there enough clarity about the purpose of matrices $A$ and $B$, including potential geometric or statistical interpretations, and how might elaborating on their function enhance the reader’s understanding of the proposed approach?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's valuable feedback.
$\color{blue}{[Question-1]:}$
**Contradiction Between Lines 260 and 098: Does the assertion that $\bf{AB}$$^{-1}$ is not diagonalizable conflict with the assumption that $\Gamma$ is diagonal (line 102 and Equation 5), especially when Figure 15 shows $\Gamma$ failing to decorrelate and yielding nearly identical outputs for both real and complex domains?**
We appreciate the opportunity to clarify this point. The confusion between Lines 260 and 098 is resolved as follows:
- Line 098: This line describes an empirical observation: "we empirically observed that $\bf{Q=AB}$$^{-1}$ matrices are diagonalizable after multiple trials of training." Subsequent derivations (Line 102 and Equation 5) are based on this observation.
- Line 260: This line reiterates that the observed diagonalizability of $\bf{AB}$$^{-1}$ is a natural outcome of the training process, not an enforced constraint. Consequently, while $\bf{AB}$$^{-1}$ is not theoretically guaranteed to be diagonalizable, it consistently exhibits this property in practice across training runs.
Furthermore, we wish to clarify the interpretation of Figure 15. The figure does not depict a failure to decorrelate. Instead, it illustrates two distinct variants of diagonalizable $\bf{AB}$$^{-1}$: one possessing complex eigenvalues and the other real eigenvalues. The complex eigenvalue variant demonstrates marginally superior performance (PSNR 26.1 vs. 25.1). The real eigenvalue case is achieved by constraining matrices $\bf{A}$ and $\bf{B}$ to be products of a shared real projection matrix $\bf{P}$ and distinct real diagonal matrices $\bf{H}$$_A$ and $\bf{H}$$_B$, respectively (i.e., $\bf{A=PH}$$_A$, $\bf{B=PH}$$_B$). This configuration ensures that $\mathbf{AB}^{-1}=\mathbf{P(H}_A\mathbf{H}_B^{-1})\mathbf{P}^{-1}$, where $\mathbf{H}_A\mathbf{H}_B^{-1}$ is a real diagonal matrix.
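The algebra behind this real-eigenvalue construction can be checked numerically. A small NumPy sketch with random stand-in matrices (dimension 8 chosen only for illustration, not the trained model's size):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
P = rng.standard_normal((d, d))          # shared real projection (assumed invertible)
H_A = np.diag(rng.uniform(0.5, 1.5, d))  # distinct real diagonal matrices
H_B = np.diag(rng.uniform(0.5, 1.5, d))

A = P @ H_A                              # A = P H_A
B = P @ H_B                              # B = P H_B

# AB^{-1} = P (H_A H_B^{-1}) P^{-1}, a real-diagonalizable matrix
Q = A @ np.linalg.inv(B)
expected = P @ (H_A @ np.linalg.inv(H_B)) @ np.linalg.inv(P)
assert np.allclose(Q, expected)

# Q's eigenvalues are the purely real ratios H_A[i,i] / H_B[i,i]
eigs = np.sort(np.linalg.eigvals(Q).real)
assert np.allclose(eigs, np.sort(np.diag(H_A) / np.diag(H_B)))
```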
---
$\color{blue}{[Question-2]:}$
**Sensitivity to Small Perturbations (Equation 1): Does the manuscript sufficiently address the impact of minor perturbations or noise on the generation process, given the reliance on non-linear transformations where even small perturbations could significantly alter the final results?**
This is an excellent point. To assess the method's sensitivity to perturbations, we conducted an experiment where perturbations were introduced into the compressed representation space. Specifically, perturbed representations ($\bf{q}$$_p$) were generated through linear interpolation between an image's representation ($\bf{q}$$_i$) and a randomly sampled noise vector ($\bf{q}$$_n$):
$\mathbf{q}_p=(1-p)\mathbf{q}_i+p\mathbf{q}_n$
The interpolation parameter, $p$, was varied from 0 (no perturbation) to 1.0 (full perturbation) in increments of 0.1. The perturbed vectors ($\mathbf{q}_p$) were then processed by FINOLA and the decoder to reconstruct images.
The result images, available at https://tinyurl.com/47whcsve, demonstrate that FINOLA exhibits robustness to small perturbations ($p$=0.1 or $p$=0.2). However, reconstruction quality degrades as the perturbation level increases.
---
$\color{blue}{[Question-3]:}$
**Role and Interpretability of Matrices $\bf{A}$ and $\bf{B}$: Is there enough clarity about their purpose, including potential geometric or statistical interpretations, and how might elaborating on their function enhance the reader’s understanding of the proposed approach?**
Thank you for this question! Understanding the meaning of matrices $\bf{A}$ and $\bf{B}$ is key to grasping FINOLA's core mechanism.
**Definition:** Matrices $\bf{A}$ and $\bf{B}$ model the local structure of the latent vector space $\bf{z}$$(x,y)$, representing horizontal and vertical spatial transformations of the feature map:
* $\Delta_x\mathbf{z}(x,y)=\mathbf{A}\hat{\mathbf{z}}(x,y)=\mathbf{P}_A\mathbf{\Lambda}_A\mathbf{P}^{-1}_A\hat{\mathbf{z}}(x,y)$
* $\Delta_y\mathbf{z}(x,y)=\mathbf{B}\hat{\mathbf{z}}(x,y)=\mathbf{P}_B\mathbf{\Lambda}_B\mathbf{P}^{-1}_B\hat{\mathbf{z}}(x,y)$
where $\bf{P}$$_A$ and $\bf{P}$$_B$ are the eigenvector matrices, and $\bf{\Lambda}$$_A$ and $\bf{\Lambda}$$_B$ are the corresponding diagonal eigenvalue matrices. We confirm that $\bf{A}$ and $\bf{B}$ are full rank and diagonalizable across multiple training runs.
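A minimal sketch of one FINOLA autoregressive step implied by these definitions; the exact normalization producing $\hat{\mathbf{z}}$ and the matrix values are assumptions for illustration, not the trained model:

```python
import numpy as np

def finola_step_x(z, A, eps=1e-6):
    """One horizontal step: z(x+1, y) = z(x, y) + A @ z_hat(x, y),
    where z_hat is the normalized feature (the specific norm is an assumption)."""
    z_hat = (z - z.mean()) / (z.std() + eps)
    return z + A @ z_hat

rng = np.random.default_rng(0)
d = 16
A = rng.standard_normal((d, d)) / d      # stand-in for the learned matrix A
z = rng.standard_normal(d)               # feature vector at position (x, y)
z_next = finola_step_x(z, A)             # feature vector at position (x+1, y)
# a vertical step would use B in the same way to move from (x, y) to (x, y+1)
```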
**Interpretation:** Imagine points in the latent space connected by "springs". Matrices $\bf{A}$ and $\bf{B}$ define the stiffness of these horizontal and vertical springs, while eigenvectors ($\bf{P}$$_A$, $\bf{P}$$_B$) indicate their directions.
* Diagonalization: Diagonalizing $\bf{A}$ and $\bf{B}$ simplifies this, projecting latent space so each dimension (eigenvector) is an independent spring with stiffness (eigenvalue).
* Horizontal and Vertical Changes: The projected horizontal change ($\Delta_x\bf{z}$) and vertical change ($\Delta_y\bf{z}$) become scaling operations, with eigenvalues from $\bf{\Lambda}$$_A$ and $\bf{\Lambda}$$_B$ determining scaling strength along each eigenvector.
In essence, $\bf{A}$ and $\bf{B}$ encode directional "stretching/compressing" factors governing local feature map changes, with eigenvalues representing the strength of these factors. | Summary: This paper introduces an encoder-decoder framework that autoregressively reconstructs images using a first-order difference equation. The method achieves high-fidelity reconstruction, outperforms traditional encoding techniques, and is effective for self-supervised learning. The key contribution of the paper is the discovery that images share a common latent wave equation structure, offering a new perspective on image representation and reconstruction.
Claims And Evidence: The claims are supported by the experiments presented in the paper; however, some experiments could be strengthened by adding additional results. For example, while the paper discusses parameter efficiency, it does not analyze training time or inference speed.
Methods And Evaluation Criteria: The proposed methods and benchmarks align well with the problem of image reconstruction, compression, and self-supervised learning. The baseline and evaluation criteria make sense for the problem at hand; however, there is a lack of perceptual metrics for image reconstruction quality. The paper primarily uses PSNR; alternative metrics, such as SSIM or LPIPS, could provide better insight into the realism of the reconstructed images. Additionally, while ImageNet-1K is indeed a comprehensive and widely used benchmark for evaluating image reconstruction, compression, and self-supervised learning, testing on additional datasets could still provide valuable insights.
Theoretical Claims: The paper does not provide explicit proofs for the theoretical claims made. Instead, the claims are primarily supported by experimental results, such as image reconstruction performance and comparisons with other methods like DCT, DWT, and convolutional autoencoders.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper generally appear sound. The authors evaluate the proposed method (FINOLA) on ImageNet-1K for image reconstruction and compression tasks, using standard metrics like PSNR. They also compare FINOLA against established methods like DCT, DWT, and convolutional autoencoders. However, as mentioned previously, some potential concerns include the lack of perceptual metrics (e.g., SSIM, LPIPS) in the image reconstruction evaluations, which could better capture visual quality. Additionally, the experiments don't address training time or inference speed. Finally, while ImageNet is a solid starting point, incorporating additional datasets would help strengthen the claim of robustness across different real-world scenarios. For example, datasets like COCO or ADE20K, which include diverse scenes and object categories, might help assess the method's performance in more complex, real-world settings. Additionally, datasets with different image qualities (e.g., lower resolution or noisy images) could offer insights into how well the model handles such variations.
Supplementary Material: I quickly looked at the appendix when figures and tables were mentioned in the main text, while the code is not provided.
Relation To Broader Scientific Literature: The key contributions of the paper build on prior advancements in image reconstruction, compression, and self-supervised learning by introducing a novel method for efficient image representation through multi-path networks. It extends ideas from traditional image processing techniques, improving upon them by offering better PSNR performance at smaller latent sizes. Additionally, it advances the field of self-supervised learning by showing that Masked FINOLA can compete with existing methods like MAE and SimMIM.
Essential References Not Discussed: I am not aware of essential references not discussed.
Other Strengths And Weaknesses: The paper's strengths lie in its originality and in the experiments that demonstrate strong performance in image reconstruction while maintaining parameter efficiency, representing an improvement over existing approaches. However, the paper's readability could be improved. The flow of information is sometimes dense and mixed, making it challenging to follow. Furthermore, many important results and insights are placed in the appendix; relocating some of them to the main paper would enhance clarity and accessibility. A more structured presentation with clearer transitions between key ideas would further strengthen the work. Additionally, as previously noted, a key limitation is the lack of perceptual quality evaluation, as the paper primarily relies on PSNR. Finally, experimenting with additional datasets beyond ImageNet could help validate the method’s robustness across different domains.
Other Comments Or Suggestions: * There are some typos in the paper. For example in the abstract:
* "empirically reveals" → "empirically reveal"
* "to reconstruct image pixels" → "to reconstruct the image pixels"
* Reduce the size of tables (e.g. 1 to 4) and move some important results from the appendix to the main text
* It's not clear to me how to read the table in Figure 5
Questions For Authors: * Q1: While ImageNet is a comprehensive dataset, could the authors evaluate FINOLA on other datasets (e.g., ADE20K, or medical images) to assess generalization?
* Q2: The evaluation primarily relies on PSNR, which does not fully capture perceptual quality. Have the authors considered using other metrics?
* Q3: Qualitative results are reported on a limited number of images, could the authors provide results on different images?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful feedback and valuable suggestions, which have significantly improved the quality of our paper.
$\color{blue}{\textbf{[Question 1]:}}$
**While ImageNet is a comprehensive dataset, could the authors evaluate FINOLA on other datasets (e.g., ADE20K, or medical images) to assess generalization?**
Thank you for suggesting the importance of assessing generalization. To address this, we conducted a novel experiment applying FINOLA to computed tomography (CT) data.
**Task:** The objective of this experiment is to predict a CT image from its corresponding CT projection data. Both CT images and CT projection data are represented as 2D images.
**Dataset:** The CT dataset, comprised of brain CT scans, consists of 47,000 training samples (CT image and CT projection pairs) and 6,000 test samples. The images have a resolution of 256x256.
**FINOLA Implementation:** In this implementation, the CT projection data is initially encoded into a single vector, denoted as $\mathbf{q}_p$. Subsequently, two decoding branches are employed. The first branch utilizes FINOLA to generate a feature map from $\mathbf{q}_p$, followed by a decoder to reconstruct the CT projection data. The second branch linearly transforms $\mathbf{q}_p$ into a vector $\mathbf{q}_i$, which is then processed by FINOLA and a decoder to predict the corresponding CT image. Notably, the two decoding branches share the core processing components (FINOLA and the decoder) but operate on distinct, linearly correlated compressed representations ($\mathbf{q}_p$ and $\mathbf{q}_i$).
**Result:** The following table presents the results of the CT image prediction task. FINOLA outperforms the two baseline methods, demonstrating its capacity to generalize to a new task and dataset.
|Method| Mean Absolute Error $\downarrow$|
|---|---|
|InversionNet [1]|63.27|
|SIRT[2]|45.67|
|**FINOLA**|**31.95**|
[1] InversionNet: An efficient and accurate data-driven full waveform inversion. IEEE Transactions on Computational Imaging, 2019.
[2] Fast and flexible x-ray tomograph using the astra toolbox. Optics Express, 2016.
---
$\color{blue}{\textbf{[Question 2]:}}$
**The evaluation primarily relies on PSNR, which does not fully capture perceptual quality. Have the authors considered using other metrics?**
The table below presents a comparison of FINOLA with the ***first stage*** (learning an autoencoder and vector quantization in the latent space) of multiple generative methods, assessed using both PSNR and Fréchet Inception Distance (FID) metrics.
|Method|Latent Size| Channel|logit-laplace loss|$L_2$ loss|Perceptual loss|GAN loss|FID$\downarrow$|PSNR$\uparrow$|
|---|---|---|---|---|---|---|---|---|
| DALL-E |16x16|--|✓||||32.0|22.8|
| VQGAN |16x16|256|||✓|✓|4.98|19.9|
| ViT-VQGAN |32x32|32|✓|✓|✓|✓|1.28|--|
|Stable Diffusion|16x16|16|||✓|✓|**0.87**|24.1|
| **FINOLA (our)**|1x1|3072||✓|||27.8|**25.8**|
While FINOLA achieves good performance in terms of PSNR, its results are less competitive with respect to FID. This divergence is a consequence of our deliberate choice to employ the $L_2$ loss function. We intentionally prioritized a straightforward loss function to emphasize that FINOLA's efficacy does not depend on complex reconstruction loss formulations. To further optimize FID scores, we acknowledge the potential benefits of incorporating perceptual loss functions and Generative Adversarial Network (GAN) losses, as demonstrated in models such as VQGAN, ViT-VQGAN, and Stable Diffusion.
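As context for the PSNR column above, a minimal reference implementation of the metric (standard definition; the toy images below are illustrative, not drawn from the paper's data):

```python
import numpy as np

def psnr(img, ref, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reconstruction and a reference image."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
rec = np.full((4, 4), 25.5)        # constant error of 25.5 out of 255
print(round(psnr(rec, ref), 2))    # 20.0 dB: 10 * log10(255^2 / 25.5^2) = 10 * log10(100)
```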
It is crucial to reiterate that FINOLA's primary objective differs from that of a generative model. Our focus is not on image generation per se, but rather on introducing a novel perspective for understanding images. FINOLA transforms an image into a compressed representation ($\mathbf{q}$) that enables a remarkably simple autoregressive process for image reconstruction, relying exclusively on first-order and linear (after normalization) local relationships.
---
$\color{blue}{\textbf{[Question 3]:}}$
**Qualitative results are reported on a limited number of images, could the authors provide results on different images?**
Additional FINOLA reconstructed images, encompassing diverse categories such as natural scenes, human portraits, facial images, animal photographs, air and land transportation, and medical images, are available at
https://tinyurl.com/4fdw47xm.
---
$\color{blue}{\textbf{[Other Comments]:}}$
Thanks for the helpful comments on writing and typos. We will address all these points in the final draft, including correcting the typos you noted, reducing the table sizes, moving the key results from the appendix, and improving the clarity of Figure 5.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thorough rebuttal. After reading their response to my review, as well as the replies to other reviewers, I have decided to increase my score. I find the proposed approach to be novel, and the claims are well supported by extensive experimental results. That said, I am not fully confident in my assessment, as I am not familiar with the relevant literature.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive follow-up and for raising your score. We are pleased that our rebuttal effectively addressed your concerns. We appreciate your feedback throughout this process and confirm that the revisions discussed will be incorporated into the final manuscript. | Summary: This paper explores the invariance in images and proposes an encoder-decoder framework based on the first-order wave equation. It works by encoding each image into an initial condition vector and then passing it to a special decoder that transforms the first-order wave equation into a linear autoregressive process to generate high-resolution feature maps and reconstruct image pixels. This approach reveals a new perspective on image understanding and provides promising directions for further exploration.
Claims And Evidence: The claims in this submission are clearly and convincingly supported. The paper provides detailed experimental results and presents comparative results in the form of tables and graphs. In addition, the paper introduces new methods and techniques and tests them on multiple datasets to demonstrate their effectiveness. Therefore, it can be considered that the claims of the paper are well-founded and well-supported.
Methods And Evaluation Criteria: The model and evaluation criteria proposed in this paper are very reasonable for the problem and application being solved. This paper proposes a new image reconstruction framework that uses a wave equation-based method to generate high-resolution feature maps and reconstruct image pixels. At the same time, this paper also conducts extensive experimental validation of the proposed model, including comparison with other existing methods and performance analysis under different parameter settings.
Theoretical Claims: This paper proposes a new image reconstruction framework that uses a wave equation-based approach to generate high-resolution feature maps and reconstruct image pixels. Specifically, it is experimentally demonstrated that each image has a unique set of one-way wave equation solutions, and these solutions can be uniquely determined by initial conditions. This process is further transformed into a first-order norm plus linear autoregressive process, which enables efficient image reconstruction. The theoretical claims made in this paper are verified in experiments, so its theoretical correctness can be considered to be guaranteed.
Experimental Designs Or Analyses: The experimental design and analysis of this paper are reasonable. This paper adopts an intuitive encoder-decoder framework for image reconstruction and compares different models. In addition, this paper adjusts different hyperparameters, such as training time, number of layers, etc., to evaluate their impact on model performance. Finally, this paper also provides detailed experimental results and visualization charts to help readers better understand and evaluate the performance of the model.
Supplementary Material: The supplementary materials cover the limitations of the study, implementation details, and additional ablation studies.
Relation To Broader Scientific Literature: The main contribution of this paper is to reveal the invariance in images and propose an image reconstruction method based on the one-way wave equation. This discovery is closely related to existing research in the fields of image processing and computer vision. In these fields, people have been exploring how to extract useful features and structures from raw data and how to use these features to achieve various tasks such as classification, detection and segmentation. The wave equation theory proposed in this paper provides a new perspective that can help us better understand the nature and structure of images and provide new ideas and directions for future image processing and computer vision research.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
A new image coding method is proposed by converting the image into a one-dimensional vector and using a special decoder to generate a high-resolution feature map and reconstruct the image pixels. This method has a high reconstruction quality.
The fact that images share a set of one-way wave equations is discovered, which provides a completely new perspective to understand images and opens up the possibility for further exploration.
Weaknesses:
This method requires a lot of computing resources, especially when the feature map is high resolution. Therefore, it may be limited in practical applications.
This method is only applicable to a specific type of image, that is, images with linear structures. For other types of images, this method may not be applicable.
The explanation in the article is not detailed enough, for example, it does not go into details about how to choose the initial conditions and how to train the model. These details may affect the actual effect of this method.
Other Comments Or Suggestions: N/A
Questions For Authors: Please refer to the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful feedback and valuable suggestions, which have significantly improved the quality of our paper.
$\color{blue}{\textbf{[Weakness 1]:}}$
**This method requires a lot of computing resources, especially when the feature map is high resolution. Therefore, it may be limited in practical applications.**
While we acknowledge that generating high-resolution feature maps with FINOLA is computationally demanding, our parallel implementation enables its application in practical scenarios. As demonstrated in Figure 4, the entire process—including encoding, generating a 64x64 feature map using FINOLA, and decoding—can be completed in just 2.6 seconds on a MacBook Air equipped with an Apple M2 CPU.
Furthermore, we emphasize that a key contribution of this work lies in offering a novel perspective on understanding images. FINOLA transforms an image into a compressed representation ($\mathbf{q}$) that facilitates a remarkably simple autoregressive process for image reconstruction, relying solely on local relationships that are first-order and linear after normalization.
In essence, the FINOLA encoder learns a compressed representation that effectively discards explicit spatial information while preserving essential information about the image's inherent local relationships. This allows FINOLA to reconstruct the image by exploiting these consistent local structures.
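To illustrate the kind of process described above, here is a minimal numpy sketch of a first-order linear autoregressive rollout from a compressed vector. The exact recurrence, the normalization, and all sizes are assumptions made for illustration; this is not the paper's actual FINOLA formulation, and the matrix `A` is random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)
C, W = 8, 16  # channels and width of a toy feature row

A = rng.normal(scale=0.1, size=(C, C))  # stand-in for the learned matrix A
q = rng.normal(size=C)                  # stand-in compressed representation q

def norm(x):
    # zero-mean, unit-variance normalization ("linear after normalization")
    return (x - x.mean()) / (x.std() + 1e-6)

# First-order autoregressive rollout: each position depends only on a
# linear function of its (normalized) left neighbor.
feat = np.empty((W, C))
feat[0] = q
for i in range(1, W):
    feat[i] = feat[i - 1] + A @ norm(feat[i - 1])

print(feat.shape)  # (16, 8)
```

Because each step is purely local and identical at every position, all positions of a row (or column) can in principle be computed in parallel, which is consistent with the parallel implementation mentioned above.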
---
$\color{blue}{\textbf{[Weakness 2]:}}$
**This method is only applicable to a specific type of image, that is, images with linear structures. For other types of images, this method may not be applicable.**
Empirically, we have demonstrated the versatility of this method across a wide range of image types. Additional FINOLA reconstructed images, encompassing multiple categories such as natural scenes, human portraits, facial images, animal photographs, air and land transportation, and medical images, are available at https://tinyurl.com/4fdw47xm.
Furthermore, our approach exhibits good performance with scientific images, such as seismic data and tomographic (CT) scans. For instance, in the domain of CT imaging (predicting a CT image from corresponding X-ray projection data), FINOLA not only performs well for both modalities (CT projection data and CT images) but also reveals a notable finding: the corresponding pairs of CT projection data and CT images exhibit a simple linear relationship between their compressed representations ($\mathbf{q}$). Please see more details in our reply to reviewer tmjQ (Question 1).
---
$\color{blue}{\textbf{[Weakness 3]:}}$
**The explanation in the article is not detailed enough, for example, it does not go into details about how to choose the initial conditions and how to train the model. These details may affect the actual effect of this method.**
Thank you for pointing out the need for more detailed explanations; we will add more training details. Most of them are already included in Appendix B.2, and we will ensure clear cross-referencing between the main paper and Appendix B.2.
To clarify the points you raised:
- Initial Condition: The initial condition, represented by the compressed vector q, is generated by the encoder network. This network maps the input image to the compressed representation.
- Training: The model is trained end-to-end using backpropagation. The encoder, FINOLA parameters (matrices $\mathbf{A}$ and $\mathbf{B}$), and the decoder network are jointly optimized to minimize the $L_2$ loss between the reconstructed and original images. Additional details are listed in Appendix B.2. | null | null | null | null | null | null |
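The end-to-end objective described above can be made concrete with a toy sketch of the reconstruction loss being minimized. The linear `enc`/`dec` maps and the single propagation step are stand-ins (in the paper these are deep networks trained by backpropagation); only the structure encoder → initial condition → propagation → decoder → L2 loss follows the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
D, C = 32, 8  # flattened toy image size and latent channels

# Stand-in linear encoder/decoder; in the paper these are deep networks.
enc = rng.normal(scale=0.1, size=(C, D))
dec = rng.normal(scale=0.1, size=(D, C))
A = rng.normal(scale=0.1, size=(C, C))  # stand-in FINOLA matrix (B omitted in 1-D)

def l2_reconstruction_loss(image):
    q = enc @ image                                  # initial condition q
    x = q + A @ ((q - q.mean()) / (q.std() + 1e-6))  # one propagation step
    recon = dec @ x                                  # decode back to pixels
    return float(np.mean((recon - image) ** 2))      # L2 objective

# In training, the encoder, A (and B), and the decoder would be jointly
# optimized by backpropagation to minimize this loss.
loss = l2_reconstruction_loss(rng.normal(size=D))
print(loss >= 0.0)  # True
```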
Janus: Dual-Server Multi-Round Secure Aggregation with Verifiability for Federated Learning | Accept (poster)
Summary: The paper proposes a new secure aggregation protocol for federated learning. This protocol relies on two non-colluding servers: one aggregates masked gradients and the other aggregates one-time-pad masks. Compared to other similar schemes, the masks here are not secret-shared with a graph of neighbours, thereby tolerating client dropouts. Also, due to the integration of a new cryptographic primitive (separable homomorphic commitments), the protocol provides security against model inconsistency attacks. The authors also implement and compare their protocol to prior works, confirming training accuracy and performance improvements.
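The dual-server split summarized above can be illustrated with a toy additive-masking sketch. All names and sizes are invented for illustration, and the real protocol additionally uses mask encryption, commitments, and verification; only the core idea is shown: masks sent to one server cancel against masked gradients sent to the other.

```python
import random

M = 2**32                 # modulus for one-time-pad masking (toy choice)
n, dim = 5, 4             # number of clients and gradient length
rng = random.Random(42)

gradients = [[rng.randrange(100) for _ in range(dim)] for _ in range(n)]

s0_msgs, s1_msgs = [], []  # what each server receives
for g in gradients:
    mask = [rng.randrange(M) for _ in range(dim)]
    s0_msgs.append([(x + m) % M for x, m in zip(g, mask)])  # masked gradient -> S0
    s1_msgs.append(mask)                                    # mask -> S1

# Each server aggregates what it holds, coordinate-wise.
s0_sum = [sum(col) % M for col in zip(*s0_msgs)]
s1_sum = [sum(col) % M for col in zip(*s1_msgs)]

# Unmasking: the masks cancel, leaving only the aggregate gradient.
aggregate = [(a - b) % M for a, b in zip(s0_sum, s1_sum)]
expected = [sum(col) for col in zip(*gradients)]
print(aggregate == expected)  # True
```

Note that no per-client communication graph is needed: a client that drops out simply contributes nothing to either server, and the remaining sums still cancel correctly.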
## update after rebuttal
Thanks for participating in the rebuttal process. After re-evaluating the paper in light of the rebuttal, I have unfortunately not increased my score. While I am confident that the paper will improve significantly once some of the proposed changes are implemented, we can only evaluate what was submitted. In particular, the definition of the separation operation is not sufficiently clear, and the main body of the paper does not demonstrate that Pedersen commitments satisfy the required homomorphism property. Additionally, some arguments provided in the rebuttal are not sound; for example, a simple (hardware-accelerated) hash verification can also be performed in O(l) and should be less costly than using exponentiation-based commitment schemes.
Claims And Evidence: Claims regarding the efficiency and accuracy of the proposed secure aggregation scheme are substantiated with experiments.
Methods And Evaluation Criteria: The presentation of the new cryptographic primitive is not convincing. First of all, some parts of the definition are not clear, e.g., if Commit gives a tuple (c_m, c_r), then Se is a trivial tuple access rather than requiring any computation. Moreover, there is no proof demonstrating that the implemented instantiation (sketched in Appendix B) indeed fulfils the security properties (the security proof in Appendix C only covers the generic construction).
Theoretical Claims: Claims on the security of the proposed instantiation of the SHC scheme are not substantiated with a proof.
Experimental Designs Or Analyses: The experimental design for verifying performance overhead and accuracy seem appropriate, covering asymptotic and empirical results.
Supplementary Material: The paper contains appendices extending the discussion of related work, providing more details on concrete instantiations, and a security analysis. I skimmed through all of them but did not verify the proposed instantiation or the security analysis in detail.
Relation To Broader Scientific Literature: The paper tries to enhance efficiency and accuracy of secure aggregation schemes for federated learning, while also providing robustness against model inconsistency attacks.
Essential References Not Discussed: In the introduction the authors claim that currently most secure aggregation schemes rely on a double masking approach. However, this ignores an entire class of secure aggregation schemes based on MPC / distributed aggregators. Likewise, such works are not discussed in Appendix A. Examples include:
- Fereidooni et al.: "SAFELearn: Secure Aggregation for private FEderated Learning"
- Gehlhar et al.: "SafeFL: MPC-friendly framework for Private and Robust Federated Learning"
- Ben-Itzhak et al.: "ScionFL: Efficient and Robust Secure Quantized Aggregation"
Other Strengths And Weaknesses: Strengths
+ New cryptographic primitive that might be of independent interest
+ Reduced overhead compared to related works (both asymptotically and concretely)
+ Support for client drop outs and security against MIA attacks
Weaknesses
- Definition of new cryptographic primitive not clear
- Only sketch for possible instantiation of new primitive
- No security proofs that the sketched instantiation fulfils all properties
Other Comments Or Suggestions: The paper is fairly well written and mostly easy to follow. The proposed protocol has clear benefits over prior works, combining desirable features with reduced performance overhead. However, there are also several negative aspects, as discussed in this review, preventing me from recommending acceptance.
With respect to the potential vulnerabilities w.r.t. MIA attacks, it is unclear why other simple solutions, such as publishing a signed hash of the aggregated model on a widely visible medium or bulletin board, are not considered as a way for users to verify that they have not been targeted.
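The alternative suggested here can be sketched in a few lines: the server publishes a hash of the aggregated model, and each client hashes its received copy and compares. The `pickle` serialization and the omission of the signature step are simplifications for illustration.

```python
import hashlib
import pickle

def model_digest(model_params):
    # Canonical serialization then SHA-256; a real deployment would also
    # sign this digest before posting it to the bulletin board.
    return hashlib.sha256(pickle.dumps(model_params)).hexdigest()

published = model_digest([0.1, -0.2, 0.3])  # posted by the server

# Honest client: received the same model as everyone else.
assert model_digest([0.1, -0.2, 0.3]) == published

# Targeted client: received an inconsistent model (a model inconsistency attack).
assert model_digest([0.1, -0.2, 0.9]) != published
print("consistency check passed")
```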
The use of the notion "semi-trusted" is not clear. In §1, the authors state that semi-trusted servers could deliberately mishandle some gradients. It should be clarified if "semi-trusted" = "semi-honest" as typically semi-honest servers are assumed to not deviate from the protocol.
Section 4.1 claims that Janus offers enhanced security over prior works. However, it should be noted that if the non-collusion assumption does not hold, there are significantly more severe consequences as the attacker then can single out individual gradients whereas this would still not be possible when relying on secret-shared masks.
Minor/typos:
- Line 221: Pdeersen
Questions For Authors: - Do you have a security proof for your instantiation of the separable homomorphic commitment scheme?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer nXLy,
Thank you for your valuable feedback and suggestions. Below, we address each of your concerns.
**1. Definition of SHC (W1&Q1):**
The output of the Commit algorithm is c, but c is not a simple tuple. In fact, we intend to convey that the complete commitment can be split into c_m and c_r via computation, rather than being stored directly. We will revise the wording. SHC is an abstract blueprint: any existing commitment scheme that meets the properties we have outlined qualifies as an SHC. In fact, we instantiate SHC with the Pedersen commitment. Furthermore, when discussing the properties of SHC (lines 215-234), we already argued that the Pedersen commitment possesses them. Thus, its security is inherently guaranteed by the security of the underlying commitment, eliminating the need for a separate proof.
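For concreteness, here is a toy Pedersen commitment showing both the "separation" into a message part and a randomness part and the additive homomorphism that secure aggregation relies on. The parameters are tiny and insecure; this only sketches the algebra, not the paper's actual SHC construction.

```python
# Toy Pedersen-style commitment: c = g^m * h^r (mod p). Parameters are
# illustrative and far too small to be secure.
p = 2**61 - 1  # a Mersenne prime, fine for a toy group
g, h = 3, 7

def commit(m, r):
    c_m = pow(g, m, p)  # message part
    c_r = pow(h, r, p)  # randomness part
    return (c_m * c_r) % p, c_m, c_r

c1, c1_m, c1_r = commit(10, 111)
c2, c2_m, c2_r = commit(32, 222)

# "Separation": the full commitment is the product of its two parts.
assert c1 == (c1_m * c1_r) % p

# Additive homomorphism: Commit(m1 + m2, r1 + r2) == c1 * c2.
c_sum, _, _ = commit(10 + 32, 111 + 222)
assert c_sum == (c1 * c2) % p
print("homomorphism holds")
```

The homomorphism is what lets a server aggregate commitments without opening them, while the message/randomness split is what the rebuttal describes as computing c_m and c_r from c rather than storing them.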
**2. References:**
Masking-based and MPC-based SA are orthogonal, with the former relying on mask generation and elimination, and the latter on distributed computations. To offer a more comprehensive view of SA, we will include the following excellent MPC-based works, including the references you recommended in the next version.
**MPC-based SA.** Multi-Party Computation (MPC) enables mutually distrustful parties to jointly compute a target function while preserving privacy, which aligns perfectly with SA. (Mohassel & Zhang, 2017) designed a scheme using secure two-party computation (2PC) and proposed MPC-friendly alternatives to non-linear functions. Prio (Corrigan-Gibbs & Boneh, 2017) employs a novel technique known as SNIPs (Secret-shared Noninteractive Proofs), enabling servers to collaboratively verify a shared proof of correctness with minimal communication overhead. Prio+ (Addanki et al., 2022) replaces zero-knowledge proofs with Boolean secret sharing and share conversion protocols, boosting client-side performance. SAFELearn (Fereidooni et al., 2021) employs MPC to enable SA, resisting inference attacks with just two communication rounds while eliminating trusted third parties and supporting client dropouts. (Gehlhar et al., 2023) proposed an MPC-based FL framework that combines SA with poisoning-resistant techniques, achieving both privacy and robustness. (Ben-Itzhak et al., 2024) introduced a novel scheme (ScionFL) that efficiently handles quantized inputs while providing robustness against malicious clients and supporting various 1-bit quantization schemes.
**3. Misunderstanding of Instantiation (W2):**
We have already provided a full instantiation in lines 640-714, not just the new primitive. The component primitives can be replaced with any instantiation that satisfies the conditions, enabling compatibility with different systems.
**4. Security of Instantiation (W3):**
We have already proven the security of the generic construction in lines 742-957. The generic construction is an abstract structure, and there are multiple possible implementations. Proving the security of the generic construction ensures the security of all instantiated schemes. Meanwhile, some existing generic constructions [1-3] ensure security in the same way we do. In summary, instantiations based on the generic construction provide equivalent security guarantees.
[1] Nguyen et al. Multimodal private signatures, CRYPTO22.
[2] Luo et al. Generic construction of trace-and-revoke inner product functional encryption, ESORICS22.
[3] Yuen et al. DualRing: generic construction of ring signatures with efficient instantiations, CRYPTO21.
**5. Other Defense of MIA (S1):**
The method you mentioned defends against MIA by adding an extra operation in every round, and FL typically requires many rounds to converge. In contrast, our Janus naturally provides this defense upon completing aggregation, without any additional steps. Moreover, verifying the signed hash in every round could impose a significant operational burden on clients. We believe this addresses your concern.
**6. Synonym and Typos (S2):**
The term 'semi-honest' is more suitable for our situation, and we will correct it.
**7. Misunderstanding of Assumption (S3):**
Compared to schemes that rely on the non-collusion assumption, our scheme enhances security by resisting MIA attacks and enabling verifiable results. Furthermore, it offers greater functionality by supporting dynamic participation and multi-round aggregation. Additionally, our assumption aligns with that of prominent works such as references [1, 2], both of which also rely on the non-collusion assumption, making it a common and reasonable choice in this field.
[1] Ma et al. Flamingo: Multi-round single-server secure aggregation with applications to private federated learning.
[2] Rathee et al. Elsa: Secure aggregation for federated learning with malicious actors.
Thank you for your time and effort. We will update the content in the next version. If you have any further concerns, please feel free to let us know. | Summary: This paper aims to address the challenges in existing secure aggregation (SA) schemes, including scalability with dynamic user participation, vulnerability to model inconsistency attacks (MIA), and the lack of verifiability in server-side aggregation results. The motivation is compelling and beneficial to the advancement of federated learning (FL). The proposed approach introduces a dual-server SA architecture and a new cryptographic primitive, namely Separable Homomorphic Commitment (SHC), which together enable key properties such as scalability, verifiability, and MIA security.
Claims And Evidence: The detailed design sufficiently supports the authors’ claims. First, the dual-server architecture eliminates the need for heavy communication graphs, thereby efficiently addressing the challenge of dynamic user participation. Moreover, the integration of Separable Homomorphic Commitment further mitigates potential attacks from malicious servers, including privacy leakage via MIA and incorrect aggregation behavior.
Methods And Evaluation Criteria: The authors provide both theoretical and experimental evaluations. The theoretical analysis demonstrates the advantages of their proposal over existing SOTA schemes in terms of computation and communication costs. The experimental analysis further validates the feasibility of their approach in the FL setting. Specifically, the authors employ two datasets, MNIST and CIFAR, to assess the impact of their proposal. They evaluate training effectiveness and communication time, comparing their method with both the original FL framework and SOTA SA schemes. The final results demonstrate that the proposed approach is practical and effective in preserving model accuracy while enhancing security.
Theoretical Claims: The threat model and security proofs are reasonable and correct. Specifically, the newly defined threat model within the proposed dual-server architecture is sound. The proofs concerning privacy, single-round security, multi-round security, and resistance to MIA are correct and provide solid support for the authors' claims. Additionally, the authors provide clear examples of collusion resistance.
One possible suggestion is that the authors could briefly include a proof sketch in the main body, even though the detailed proofs are provided in the appendix.
Experimental Designs Or Analyses: The experimental designs are sound and valid. The authors consider both dataset selection and multi-party scenario simulation. The chosen datasets are appropriate for covering diverse scenarios and models, particularly since the proposed approach is a generic construction not limited to a specific dataset or model.
The only concern is that the authors could provide further discussion on the rationale behind setting the user dropout rate at 10%, even though the theoretical analysis has demonstrated that the scheme can tolerate up to n−2 colluding users.
Supplementary Material: I have reviewed most of the supplementary material, particularly the concrete instantiation of Janus and its corresponding proofs. These materials further reinforce the solidity and security of the proposed design.
Relation To Broader Scientific Literature: Like existing literature, this work focuses on achieving secure aggregation (SA) in the FL setting. In addition to maintaining high accuracy and privacy protection, as ensured by existing SA methods, this work further achieves scalability, verifiability, and MIA security. Notably, its key components, including the previously mentioned dual-server architecture and SHC primitive, are lightweight yet capable of effectively guaranteeing the targeted properties.
Essential References Not Discussed: The current related works provide sufficient insight into the recent progress in SA research.
Other Strengths And Weaknesses: Strengths:
- The design of dual-server-based SA protocol is concise yet effective, which successfully addresses the remaining challenges faced by existing SA protocols.
- The proposed SHC is valuable for designing SA protocols in the FL setting and holds independent theoretical interest in commitment research.
- The clearly defined threat model and solid security proofs not only substantiate their claims but also inspire further research on SA in the dual-server setting.
Weakness:
- The authors did not discuss the rationale behind setting the user dropout rate at 10% in Section 4.2.
Other Comments Or Suggestions: 1. A proof sketch could be included in the main body to provide a brief understanding of the security proofs, even for readers who do not consult the appendix.
2. The potential applications of the SHC primitive could be further discussed to broaden its impact.
Questions For Authors: 1. What is the rationale behind setting the user dropout rate at 10% in the experimental section?
2. Are there other potential applications for the newly proposed SHC primitive?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer LvMf,
We sincerely appreciate your valuable feedback. Below we provide a point-by-point response.
**1. Dropout rate (W1&Q1):**
To ensure a fair comparison, it is crucial to recognize that mask-based approaches (including BBSA, VeriFL, and Flamingo in Section 4.2) must handle client dropouts to achieve correct aggregation. While secret sharing can address dropouts, its reconstruction overhead grows substantially with higher dropout rates (as it must recover masks for all dropped clients). In contrast, our Janus framework maintains normal aggregation regardless of dropout events, giving it inherent scalability advantages. Existing evaluations of BBSA/Flamingo already consider dropout rates up to 30%, whereas our experiments focus on demonstrating Janus's superiority under dropout conditions. Even at the modest 10% dropout rate we tested, Janus shows significant performance gains, advantages that would only amplify with increasing dropout rates. Thus, selecting a 10% dropout rate is both methodologically sound (aligning with prior work's evaluation scope) and sufficient to highlight our core contribution: dropout-robust aggregation. We will explicitly discuss this rationale in our revision.
**2. More Applications (Q2):**
We appreciate this insightful question regarding SHC's broader applicability. It is crucial to note that SHC is fundamentally a general-purpose cryptographic primitive, not limited to the specific applications we discussed. While our paper focuses on its application to SA in federated learning, SHC's architectural flexibility makes it adaptable to any scenario requiring both confidentiality and verifiability, particularly in distributed systems with accountability requirements. Specifically, SHC's cryptographic separation of concerns makes it uniquely suitable for: (1) medical data federation where SHC enables privacy-preserving auditing, (2) dual-server e-voting systems needing verifiable tallying, and (3) secure outsourced computation requiring input/output validation. We will add a discussion in the next version.
Thank you again for your insightful comments. We will integrate these clarifications in the next version. Please don't hesitate to share any further concerns. | Summary: This paper proposes Janus, a secure aggregation scheme based on dual servers for FL, whose core innovation lies in breaking through the communication constraints of the traditional single-server architecture: through the design of a bidirectional interaction protocol that supports multiple rounds of aggregation and verifiable results. Janus takes the lead in realizing the enhancement of users' offline freedom under the collaborative verification mechanism of the two servers. The dual servers each have their own role to constrain each other and securely solve the aggregation problem in FL. For dynamic user scenarios, Janus lifts the strong dependency of users online, so that new users can join or leave without reconfiguring the communication topology, which significantly improves system scalability. To support the architecture, the paper innovatively proposes SHC, which provides a cryptographic foundation for the dual-server paradigm. The experimental part verifies the theoretical analysis through side-by-side comparison of multiple datasets and similar advanced schemes, which improves the security while Jauns has good efficiency. Finally, the security analysis confirms the advantage of the scheme in balancing between efficiency and security.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, I checked the correctness and proof of security of the scheme in this article and both are correct.
Experimental Designs Or Analyses: Yes, I checked these designs and found no issues.
Supplementary Material: Yes, I reviewed all of the supplementary material.
Relation To Broader Scientific Literature: This paper realizes multi-round secure aggregation, which supports dynamic user updates while verifying the aggregation results, in addition to effectively resisting the model inconsistency attack, which are security issues that have not been simultaneously addressed in previous studies. This paper provides useful ideas for subsequent research.
Essential References Not Discussed: Essential related works for understanding the key contributions of the paper have been appropriately cited and discussed.
Other Strengths And Weaknesses: Strengths:
1. Innovative architectural design. I like the overall solution idea of this paper, where the dual servers each have their own role to constrain each other and securely solve the aggregation problem in federated learning.
2. Efficiency and scalability. Reducing client communication and computation overhead from logarithmic level to constant level significantly improves the efficiency.
3. New technologies and applications. The perfect fit of SHC and dual-server architecture can provide new ideas of privacy assurance for subsequent dual-server systems.
4. Enhanced security. Protecting user gradient privacy while supporting verifiable aggregation results, effectively resisting model inconsistency attacks. The defense analysis of MIA could provide new ideas for such research.
Weaknesses:
1. Insufficient details of dynamic engagement. While dynamic user joining is supported, it is not clearly stated how to handle dynamic updates of user keys (e.g., key rotation mechanism), which may introduce the risk of long-term key compromise.
2. Not reader friendly enough. Although the security assumptions in this paper are to some extent reasonable and feasible, the main paper is hard to follow, and the threat model is described only in the appendices. I believe that highlighting these points in the main paper would eliminate concerns about the security assumptions.
Other Comments Or Suggestions: Suggestion: The potential of combining Janus with differential privacy or homomorphic encryption is not discussed, limiting its application to higher privacy demanding scenarios.
Questions For Authors: 1. Despite the numerous capabilities and benefits of the proposed scheme, what limitations does it currently have?
2. This paper focuses on multi-round secure aggregation. What challenges does existing research face in achieving multi-round secure aggregation?
Ethical Review Concerns: Affirmed.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer uPUn,
Thank you for your valuable comments. We address your concerns as follows.
**1. Dynamic Engagement (W1):**
Our scheme enables dynamic participation where clients can join or leave at any time without compromising security. The process is designed as follows: new clients simply (1) obtain system public parameters, (2) generate their key pairs, and (3) complete one training round with the initial model to receive the global update. Departing clients gracefully exit by ceasing submissions, while key updates follow the same streamlined joining procedure. Importantly, all participant changes occur without requiring reconstruction of the communication graph, maintaining both security and system efficiency.
**2. Writing (W2):**
The current version outlines our security assumptions in the abstract and Lines 88-108, with complete formal specifications provided in Appendix C.1. We will incorporate a more detailed discussion of these assumptions in the next version.
**3. Future Work ( Q1):**
Building on our current work, we will explore methods to resist client poisoning attacks, a common challenge faced by existing masking and encryption-based schemes. Additionally, we will focus on further enhancing both the efficiency and security of our approach.
**4. Challenges of Multi-round (Q2):**
The challenges of multi-round aggregation are already discussed in Lines 42 to 98 of the paper. Specifically, the core challenges regarding this aspect are threefold: (1) most existing schemes require regenerating system parameters for multi-round aggregation, i.e., repeating a single round aggregation many times to realize multiple rounds; (2) practical training scenarios with dynamic participation (users joining/leaving) typically demand complex communication graph reconstruction; and (3) while existing approaches need dedicated, time-consuming operations to achieve both multi-round execution and MIA resistance, our scheme inherently resists MIA during SA without additional overhead.
We greatly appreciate your feedback and will ensure these clarifications will be included in the next version. If you have any further concerns, please let us know. | Summary: Janus proposes a 2-server aggregation protocol in which one of the servers provides the aggregated masked results and the other the aggregated masks and aggregated commitments so that clients can check that the results are consistent. The protocol also prevents the server from ever learning the output to improve the privacy guarantee. The paper goes on to explain the results of some experiments that check that ML still works if the gradient is aggregated using various different means of implementing discrete addition.
Claims And Evidence: All the real justification for the relevant claims is in the supplementary material, so I haven't read it closely. The protocol is fairly simple, so it isn't too hard to understand what it does anyway.
Methods And Evaluation Criteria: I don't understand the point of the experiments section. Doesn't Janus just compute the exact (albeit discretized) aggregation of the contributions from clients, just like every other protocol for secure aggregation? If so, how could the results of the ML experiments be any different, and what is added by providing them?
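For context, "discretized aggregation" here refers to the standard fixed-point encoding used by secure aggregation schemes: gradients are scaled to integers, summed, and rescaled, so the result matches the plaintext float sum up to quantization error. A toy sketch (the scale factor is an arbitrary choice):

```python
SCALE = 2**16  # fixed-point scaling factor (illustrative choice)

def quantize(xs):
    return [round(x * SCALE) for x in xs]

def dequantize(xs):
    return [x / SCALE for x in xs]

client_grads = [[0.25, -1.5], [0.125, 0.75], [-0.5, 0.0625]]

# Secure aggregation sums the integer encodings; here we do it in the clear.
int_sum = [sum(col) for col in zip(*(quantize(g) for g in client_grads))]
secure = dequantize(int_sum)
plain = [sum(col) for col in zip(*client_grads)]

# Each coordinate matches the float sum up to quantization error.
for s, q in zip(plain, secure):
    assert abs(s - q) <= len(client_grads) / SCALE
print(secure)
```

Since the aggregate is bit-for-bit determined by the quantized inputs, accuracy experiments mainly confirm that the chosen discretization does not hurt training rather than anything protocol-specific.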
Theoretical Claims: I didn't check the proofs.
Experimental Designs Or Analyses: The experiments seem pointless so I wouldn't know what to check.
Supplementary Material: I read the functionalities to understand what the protocol was doing and the appendix on MIA.
Relation To Broader Scientific Literature: The key contributions seem pretty minimal, I don't think they contribute to the discussion usefully.
Essential References Not Discussed: There are no missing references that stand out.
Other Strengths And Weaknesses: There are a few problems with this paper.
The assumption that the server might be malicious to the point of running MIA attacks but not be capable of corrupting even a single client in a dynamic client system seems very unrealistic. Thus the value of the server "not finding out the output" seems pretty low.
Aggregation is easy in the two server model. That the inputs can be additively secret shared is decades old folklore and that one of them can be expanded from a key to reduce communication is also decades old folklore.
The secret key aggregation seems wrong: you seem to assume that the keys can be aggregated and then expanded to give the same result as if they had been expanded and then aggregated. Something like this is possible with RLWE, cf. ACORN or WILLOW, but you don't mention anything about that.
If you meant to aggregate the full masks at S_1, this seems to give the server S_1 the opportunity to add any amount to the final aggregation, completely negating the point of the commitments (the only part of the protocol that isn't old folklore). Also, this would substantially increase the communication overhead, destroying probably all of your communication advantage over other recent protocols.
You claim the protocol is multi-round and dynamic but say nothing about how a newly joining client could get access to the up-to-date model. Obviously, as the server isn't supposed to learn this, it should be secret, and you would need some plan to deal with it.
Other Comments Or Suggestions: In your definition of SHC your c_m is never used as an input to any of the functions, so there is no implicit requirement on it at all. I think I know what you want because I assume you just want Pedersen commitments and to do the obvious thing at each point, but if you are going to specify SHC as a new primitive at least define it precisely. It isn't really a new primitive if the reader has to just guess what you mean from their knowledge of how (the rather old primitive of) Pedersen commitments work.
Questions For Authors: What situation is the no corrupt client assumption reasonable in?
What do you mean by the secret key aggregation?
How do clients joining learn the model?
What are the experiments intended to check, that isn't obvious from the fact you are replacing addition with something that is bit for bit the same under normal conditions?
Why does the functionality you define for your security proofs end with S_1 receiving the output when they are just a support server and the output seems like it goes to the clients in the protocol? Is it because you don't know how to prove S_1 can't mess up the result that the clients receive? (This somewhat negates the point of the commitments, and the reason you can't prove that it is impossible is because there is an attack.)
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Dear Reviewer e8s4,
We sincerely appreciate your time and effort spent on our work. We have noticed that most of your concerns stem from misunderstandings about our paper. Below, we will first clarify these misunderstandings point by point.
**Misunderstanding 1. Security assumptions (W1&Q1):**
We clarify that we do not assume the server cannot corrupt a single client—rather, our scheme ensures security even if the server corrupts up to n-2 clients. For large-scale FL (n ≫ 1), corrupting n-2 clients is impractical, making this a realistic assumption (lines 88-101 and Section C.1). Additionally, our assumption aligns with prominent works such as references [1] and [2]. Such an assumption represents a widely accepted and theoretically sound approach within this research domain.
[1] Ma et al. Flamingo: Multi-round single-server secure aggregation with applications to private federated learning.
[2] Rathee et al. Elsa: Secure aggregation for federated learning with malicious actors.
**Misunderstanding 2. Unrelated Techniques and Secret Key Aggregation (W2,3&Q2):**
Our work does not involve secret sharing and key expansion as you mentioned. Additionally, our work goes beyond merely implementing aggregation: we have also introduced multi-round, verifiable aggregation that is resistant to MIA, among other enhancements. Finally, secret key aggregation works as follows: clients first encrypt masks using S1's public key, then S1 aggregates, and finally users verify correctness by cross-checking with S0's aggregated values. This mutual verification between S0 and S1 provides the security guarantee without resorting to time-consuming RLWE-based techniques. The details are in Lines 264-317, with formal correctness analysis in Lines 715-740.
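To make the described two-server flow concrete, here is a toy numerical sketch of additive masking with two non-colluding servers (my own illustration with made-up values; it omits the paper's encryption under S1's public key, the SHC commitments, and the cross-verification step):

```python
import random

# Toy sketch (not the paper's protocol): each client masks its gradient with a
# random value, sends the masked gradient to server S0 and the mask to server S1.
# Each server aggregates only what it receives, so neither sees any individual
# gradient; clients recover the true sum by subtracting the aggregated masks.
gradients = [3.0, 5.0, -2.0]
masks = [random.uniform(-10, 10) for _ in gradients]

s0_aggregate = sum(g + m for g, m in zip(gradients, masks))  # S0 sees only masked values
s1_aggregate = sum(masks)                                    # S1 sees only masks

recovered_sum = s0_aggregate - s1_aggregate
print(abs(recovered_sum - sum(gradients)) < 1e-9)  # True
```

A mismatch between the quantity a client reconstructs and the committed values would, in the paper's scheme, signal tampering by one of the servers.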
**Misunderstanding 3. Security Proof (Q5):**
Our security proof is correct: our scheme (lines 252-317) is consistent with the ideal function (lines 799-812, 846-862), where the server holds the masked aggregated gradients (Lines 288-317), and the client holds the gradient aggregation values. The security proof methodology follows the state-of-the-art scheme Flamingo (SP23). The only identified issue is a minor typo in the function definition (S1 should be S0), which does not introduce the inconsistency you raised. This typo will be corrected in the next version.
Next, we will address your concerns point by point.
**Concern 1. Malicious S_1 (W4):**
Our scheme is already designed to resist the type of attack you mentioned. Specifically, S_0 and S_1 constrain each other, and users can validate the aggregated results of the two servers with the SHC. If either server attempts manipulation, clients detect inconsistencies during validation and abort participation. After theoretical analysis (Table 2), this does not increase our communication overhead.
**Concern 2. New User Join (W5&Q3):**
The process for a new user to obtain the model is simple and straightforward. Specifically, the user only needs to retrieve the aggregation information from S0 and S1, and then, like any other user, they can locally obtain the model parameters. Moreover, the new user can not only acquire the model parameters but also verify whether the parameters provided by the two servers are correct. This is detailed in Lines 304 to 317 of the paper.
**Concern 3. Experiment (Q4):**
Our experiments aim to evaluate the trade-offs introduced by SA (increased training time, potential efficiency overhead) while demonstrating that these costs remain acceptable compared to the security and function benefits. Our experimental design follows the relevant literature such as [1,2] in the same field. Therefore, our experimental design is reasonable and well validates the theoretical analyses in Section 4.1.
[1] Wang et al. VOSA: Verifiable and oblivious secure aggregation for privacy-preserving federated learning.
[2] Ma et al. Flamingo: Multi-round single-server secure aggregation with applications to private federated learning.
We greatly appreciate your feedback and will incorporate these clarifications in the next version. Thank you again, and we look forward to your response. | null | null | null | null | null | null |
Learning Robust Neural Processes with Risk-Averse Stochastic Optimization | Accept (poster) | Summary: This paper proposes a training method for robust neural processes based on risk-averse stochastic optimization.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Mostly make sense, except the performance metrics for the experiment part.
Theoretical Claims: No.
Experimental Designs Or Analyses: Yes. I checked the Bayesian optimization part. The issue is that simple regret is used as the performance metric. However, since the paper is focusing on robust neural process, I think it is more appropriate to use robust metric such as the CVaR of simple regret.
Supplementary Material: No.
Relation To Broader Scientific Literature: As far as I know, prior works of NP focus on average performance in prediction, which in many applications of NP, it is necessary to be risk-averse. This paper shows a method to train robust NP.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The considered problem is original and important for many practical problems.
2. The proposed training method is novel based on distributionally robust optimization.
Weaknesses:
1. I think the experimental part should adopt metrics with more robustness consideration. For example, use CVaR of the simple regret for the Bayesian optimization part.
Other Comments Or Suggestions: 1. Line 115, right column, typo in "fundom"?
2. Perhaps give some reference on the equivalence between Eq. (3) and Eq. (4).
3. Also some reference on the equivalence between Eq. (3) and Eq. (6).
Questions For Authors: 1. Is it possible to give a theoretical characterization on how the robustness of the neural process is improved with the proposed approach?
2. Seems that the robust training process also improves the average performance (e.g., in Fig. 3). Can the authors discuss this?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your positive and valuable feedback. We have made efforts to address your concerns. If there are any further questions, please let us know and we will reply promptly.
**Q1.** **The paper uses simple regret as the primary performance measure, but a more robust metric like the CVaR of simple regret may be more suitable given the risk-averse theme.**
> **Reply**: We agree that CVaR-based metrics align directly with our emphasis on robustness. We initially chose simple regret for consistency with prior NP-based Bayesian optimization work, which commonly measures average performance.
> In the final version, we plan to include CVaR of simple regret in our Bayesian optimization experiments. Preliminary results suggest that our risk-averse training indeed lowers the worst-case tail of the regret distribution, indicative of stronger robustness.
> We will highlight these findings to reinforce that the proposed method not only reduces average regret but also mitigates high-regret outliers.
**Q2.** “Perhaps give some reference on the equivalence between Eq. (3) and Eq. (4).” “… equivalence between Eq. (3) and Eq. (6).”**
> **Reply**: Thank you for noting this. We do rely on standard transformations from the CVaR’s probability-constrained problem to its slack-variable-based and DRO-based forms. We will add more explicit citations (e.g., Rockafellar et al. (2000) for CVaR expansions and Shapiro et al. (2009) for standard references on DRO formulations) and a short annotated derivation in the supplement clarifying each step.
>
> *[Rockafellar et al. (2000)]: Rockafellar, R. T., Uryasev, S., et al. Optimization of conditional value-at-risk. Journal of risk, 2:21–42, 2000.*
>
> *[Shapiro et al. (2009)]: A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on stochastic programming: modeling and theory. SIAM, 2009.*
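For reference, the standard Rockafellar–Uryasev variational identity typically cited for this kind of equivalence is the following (a sketch in generic notation; the paper's Eqs. (3)–(6) may parameterize it differently):

```latex
\mathrm{CVaR}_{\alpha}(L)
  \;=\; \min_{t \in \mathbb{R}} \; t \;+\; \frac{1}{1-\alpha}\,
        \mathbb{E}\big[(L - t)_{+}\big],
\qquad (x)_{+} := \max(x, 0),
```

where the minimizing $t$ recovers the Value-at-Risk at level $\alpha$, which is what turns the probability-constrained problem into a slack-variable form.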
**Q3.** **Typographical Error in “fundom” (Line 115, right column)There is a typo: “fundom.”**
> **Reply**: We will correct “fundom” to “random” or “function domain” (whichever was intended in context). Thank you for pointing it out.
**Q4.** **Is there a theoretical way to show how our proposed approach improves the robustness of neural processes?**
> **Reply**: As shown in Equations (3)–(6), our training reweights high-loss (or high-regret) tasks so they carry larger gradients, effectively controlling the model’s tail behavior. This is theoretically grounded in distributionally robust optimization.
> A formal proof of how and by how much tail performance improves remains challenging, especially in a non-convex NP setting. Nonetheless, the link between CVaR and tail distribution improvements is well-known in risk-averse optimization theory (e.g., Rockafellar et al., 2000).
> We will add references and clarifications that highlight how increasing the confidence level α in CVaR training can reduce the worst-case outcomes.
**Q5.** **The robust training process also improves average performance (Figure 3). Please discuss.**
> **Reply**: Indeed, empirically, we observe that focusing on high-risk tasks can prevent overfitting to easy samples and help the model learn more stable, generalizable parameters. That can yield broader improvements in average performance as well.
> A moderate amount of noise or risk weighting seems beneficial for both extremes (the tail and the average) by flattening observed losses. We will stress this empirical phenomenon more in Section 5.
Lastly, thank you once again for your valuable comments. | Summary: This paper investigates the robust neural processes problem from a risk-averse perspective, aiming to control the expected tail risk at a given probabilistic level. The authors formulate the CVaR optimization as a distributionally robust optimization (DRO) problem and propose a double-loop stochastic mirror prox algorithm with variance reduction techniques. The outer loop follows a standard variance reduction process, while the inner loop incorporates momentum to accelerate convergence. Simulation results demonstrate the effectiveness of the proposed algorithm and show that it enhances model robustness.
Claims And Evidence: Yes, the paper’s conclusions are well-supported by the provided evidence.
Methods And Evaluation Criteria: Yes, the simulation results verify the effictiveness of the proposed algorithm.
Theoretical Claims: Yes, I check their Appendix B, see the following "Other Strengths And Weaknesses" part for details.
Experimental Designs Or Analyses: Yes, the experimental results are reasonable.
Supplementary Material: Yes, I review Appendix B, theoretical investigations.
Relation To Broader Scientific Literature: In the theory part, i.e., Appendix B, the authors cite the results in [Curi et al., 2020]. For Theorems B.4 and B.5, I think the proof techniques used are standard; I will add more comments in the following "Other Strengths And Weaknesses" part.
Essential References Not Discussed: No
Other Strengths And Weaknesses: See the following "Other Strengths And Weaknesses" part.
Other Comments Or Suggestions: ### 1. Main Contribution
This paper is well-written and easy to follow. However, its contribution appears to be incremental. The Conditional Value at Risk (CVaR) optimization problem and the reformulation techniques in Equation (5) have been previously explored in the literature, such as [Curi et al., 2020]. The primary contribution of this work seems to be the Variance-Reduced Stochastic Mirror Prox Algorithm. However, the authors do not provide a theoretical convergence proof, which limits the impact of this contribution. While the experimental results demonstrate the algorithm’s effectiveness, a theoretical analysis would significantly strengthen the paper.
### 2. Algorithmic Considerations
- What is the sample complexity and the minimum number of iterations required to achieve a certain level of accuracy?
- In Algorithm 1, Line 5, how is the maximum number of inner loop iterations, $K_s$, determined?
### 3. Theoretical Questions (Appendix B)
- **Proposition B.1:** The statement "Let $h: \mathcal{X} \to \mathcal{Y}$ be a finite function class $|\mathcal{H}|$" is unclear. Did you mean that $\mathcal{H}$ is a function class with finite VC dimension, denoted as $|\mathcal{H}|$?
- **Theorem B.5 (Proof, Line 728):** Regarding the norm $\|\cdot\|_{\theta a, \star}$, how is the parameter $a$ defined?
- **Line 740 (Second-to-last inequality):** Should $q_m^+$ and $G$ be squared?
- **Equation (27):** Should the left-hand side be enclosed in absolute values or a norm?
- **Equation (28) (Last inequality):** Should $\|\theta^+ - \theta^-\|_{\theta}$ be squared?
### 4. Clarifications and Notation Issues
- **Line 127 (Left Column):** In Section 2.3 (page 3) of Garnelo et al. (2018a), the authors describe training Conditional Neural Processes (CNPs) by predicting the full dataset conditioned on a random subset, without requiring the target set to be disjoint from the context set. However, in Section 3 of this paper, it is stated that $\mathcal{T} \subseteq \{1,2,\dots,N\}$ with $\mathcal{T} \cap \mathcal{C} = \emptyset$ only in CNP. Could you clarify whether the target set in CNP is necessarily disjoint from the context set, or if this is simply an optional design choice?
- **Lines 220–221:** In the expression $\max_{\theta\in \Theta} \psi_{\theta}(\theta) - \min_{\theta\in \Theta} \psi_{\theta}(\theta) \leq D_{\theta}^2$, I suggest using a different variable to distinguish the parameter and argument of $\psi(\cdot)$ for clarity.
### 5. Simulation Results
- **Lines 436–437:** The authors state that the robust solutions shift the original risk distribution to the left in Figure 4. However, in Figure 4(b), RANP still has a portion in the rightmost bar. Could you adjust the probability level to make the shift more apparent?
### 6. Typographical Errors
- **Page 2 (Left Column, Lines 78–79):** "higi-loss" should be corrected to "high-loss".
- **Page 13 (Assumption B.3):** There is an extraneous bracket in the phrase "Smoothness and Lipschitz Continuity".
- **Page 14 (Theorem B.5):** The notation for the Lipschitz constant is inconsistent. The paper states that $\nabla F_{\alpha}$ is $L^\star$-Lipschitz, but it should be $L_{\star}$. Please unify the notation throughout the manuscript.
Questions For Authors: See the "other comments or suggestions" part for details.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for your positive and valuable feedback. We have made efforts to address your concerns. If there are any further questions, please let us know and we will reply promptly.
**Q1.** **The contribution seems incremental.**
> **Reply**: While CVaR optimization is well-established (e.g., in [Curi et al., 2020]), we believe our contribution lies in adapting it specifically to the Neural Process (NP) framework. This is non-trivial because tasks in meta-learning differ greatly in difficulty and distribution, making CVaR-based reweighting particularly relevant to “worst-case adaptation.”
>
> We provide a finite-sum DRO formulation for task-level NP optimization and find that standard SGD-based solutions for CVaR can suffer from exploding/vanishing gradients. The double-loop mirror prox with variance reduction addresses these problems, stabilizing updates while achieving robust generalization.
>
> As with many deep-learning contexts, providing rigorous global convergence guarantees in non-convex scenarios is challenging. We plan to investigate approximate convergence or local convergence guarantees in a future revision, building upon the convex analyses presented in Appendix B.
>
>*[Curi et al., 2020] Curi, S., Levy, K. Y., Jegelka, S., and Krause, A. Adaptive sampling for stochastic risk-averse learning. Advances in Neural Information Processing Systems, 33:1036–1047, 2020.*
**Q2.** **(a) What is the sample complexity and minimum number of iterations required to achieve a certain level of accuracy? (b) In Algorithm 1 (Line 5), how is the maximum number of inner loop iterations Ks determined?**
> **Reply**: We follow standard distributionally robust optimization settings, where each task can be large, and the meta-dataset comprises many tasks. In each iteration, we typically sample at least one data point per task (one shot per task). Detailed formal sample complexity results exist for convex mirror-prox setups; translating them to non-convex NPs remains an open question.
>
> In the paper, $S$ denotes the number of outer loops (epochs), and $K_s$ represents the number of inner-loop iterations within each outer-loop $s$. In experiments, we set $K_s$ as a constant across all outer loops, i.e., $K_s = K$ for all $s$. $K$ is chosen to scale with the average number of samples per task. Empirically, we find that $K_s = 100-500$ often suffice for stable training in NP tasks; deeper loops help slightly but yield diminishing returns relative to the added cost. For outer loop epochs $S$, usually between 50 to 500 epochs. For simple tasks (e.g., low-dimensional regression, small datasets), $S$ is 50–200 epochs. For complex tasks (e.g., image completion), $S$ often requires 200–500+ epochs. We will expand them to also show the effect of varying the number of inner/outer loop iterations in a revised version.
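As a structural illustration of the double-loop scheme discussed above, here is a toy sketch on a finite-sum quadratic (my own example with made-up values; it uses plain SVRG-style variance-reduced gradient steps and does not reproduce the paper's mirror-prox updates or Bregman setup):

```python
# Double-loop variance reduction on f(x) = (1/M) * sum_m (x - c_m)^2,
# whose minimizer is the mean of the centers c_m.
centers = [1.0, 2.0, 3.0, 6.0]

def grad_m(x, c):            # per-task stochastic gradient
    return 2.0 * (x - c)

def full_grad(x):            # "snapshot" gradient over all tasks
    return sum(grad_m(x, c) for c in centers) / len(centers)

x = 0.0
for s in range(5):                       # outer loop: snapshots (epochs S)
    snap, g_snap = x, full_grad(x)
    for k in range(50):                  # inner loop: K variance-reduced steps
        c = centers[k % len(centers)]    # cycle tasks (one sample per task)
        g = grad_m(x, c) - grad_m(snap, c) + g_snap
        x -= 0.1 * g
print(round(x, 3))  # 3.0, the mean of the centers
```

The correction term `grad_m(x, c) - grad_m(snap, c) + g_snap` keeps the per-step gradient unbiased while shrinking its variance as the iterate approaches the snapshot, which is the same mechanism the rebuttal's inner loop relies on.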
**Q3.** **Theoretical Questions (Appendix B)**
> **Reply**: For Proposition B.1, we will clarify that H refers to a set of finite cardinality or finite VC dimension to avoid ambiguity.
>
> For Theorem B.5, the $a$ in the norm $|\cdot|_{\theta a,*} $ is a typo, it should be $|\cdot|_{\theta,*} $.
>
> For Line 740 (Eq.(26)), the second term should be $2(\sum_{m=1}^M|q_m^+ - q_m^-|G)^2$. For the second term in line 737, we also omitted the square. Thank you again for pointing out the error.
>
> For the left-hand side of Equation (27), adding or omitting the absolute value or norm is acceptable.
>
> For Equation (28), we will correct $||\theta^+ - \theta^-||_{\theta}$ to be squared, i.e., $||\theta^+ - \theta^-||_{\theta}^2$
**Q4.** **Clarifications and Notation Issues**
> **Reply**: Line 127 (Left Column): Garnelo et al. (2018a) assume the target set $T$ is disjoint from the context set $C$ in standard CNP. In our exposition, $T$ could overlap with $C$ or not. The choice is optional and widely used in BNP, DIVNP, TNP, SNP, and we will clarify that it differs slightly from the original CNP assumptions.
> Lines 220–221: In the expression $max_{\theta} \psi_{\theta}(\theta)−min_{\theta} \psi_{\theta}(\theta) ≤ D_{\theta}^2$, we will revise the notation to separate the parameters inside $\psi(\cdot)$ from the outer $\theta$ symbol, ensuring clarity.
**Q5.** **Lines 436–437 suggest robust solutions shift the risk distribution left (Figure 4), but in Figure 4(b), RANP still has a rightmost bar. Could you clarify?**
> **Reply**: RANP does not eliminate all high-risk tasks but significantly reduces their proportion. We will refine the text to specify “we shift the distribution, decreasing the fraction of tasks with high risk” rather than implying complete removal. We can also adjust Figure 4’s bin widths or annotate it more explicitly to highlight the reduction percentage.
**Q6.** **Typographical Errors**
> **Reply**: We have thoroughly revised the spelling errors, typographical mistakes, and symbol ambiguities in the text.
Lastly, thank you once again for your valuable comments. | Summary: This paper introduces a new framework for improving the robustness of Neural Processes. Traditional NPs optimize for average performance across tasks using empirical risk minimization, but this can lead to poor adaptation on difficult or high-risk tasks.
The authors propose a risk-averse optimization strategy based on Conditional Value-at-Risk (CVaR), which shifts focus toward minimizing the loss in the worst-performing fraction of tasks (top % highest-risk tasks).
To make CVaR optimization tractable for NPs, the authors reformulate the objective as a finite-sum minimax problem and solve it using a variance-reduced stochastic mirror prox algorithm. This method uses a double-loop structure, where an outer loop computes stable "snapshot" gradients and an inner loop performs refined updates using stochastic, variance-reduced gradients across tasks.
The authors evaluate their method across multiple domains—image completion (CelebA, EMNIST), Bayesian optimization, and contextual bandits—comparing robust NPs (RNPs) against standard, bootstrapped, and stabilized versions of NPs. Across all benchmarks, RNPs consistently demonstrate improved robustness, particularly under data distribution shifts, model-data mismatch, and adversarial task setups. The method effectively reshapes the task risk distribution, reducing the frequency of high-risk task failures.
Overall, the paper contributes a theoretically grounded, practically effective approach for training Neural Processes to be more reliable under task uncertainty and variability.
Claims And Evidence: Most of the paper's key contributions — particularly the need for CVaR-based optimization, the effectiveness of the proposed algorithm, and performance improvements in multiple domains — are supported with clear and convincing evidence.
However, a few theoretical claims would benefit from stronger empirical or quantitative backing:
- generality: experiments only evaluate a fixed set of standard NP architectures (CNP, ANP, ConvNP, etc.), so it's unclear how well the method transfers to significantly different variants or untested tasks.
- bias reduction: This is stated theoretically but not quantitatively evaluated in experiments (no analysis of bias vs. variance trade-off or formal bias reduction metrics).
Methods And Evaluation Criteria: Yes. The proposed methods and evaluation criteria are generally well-aligned with the problem of improving robustness in Neural Processes under task variability and risk.
Theoretical Claims: The paper does not contain unproven theoretical claims, and where limitations exist (e.g., convergence in non-convex regimes), they are appropriately acknowledged. No significant correctness issues were identified in the stated proofs or derivations.
Experimental Designs Or Analyses: Minor issues:
---
- Hyperparameter sensitivity: no detailed discussion of sensitivity to CVaR level α, learning rates, or inner/outer loop lengths — which could affect robustness.
- Runtime: no analysis of training time, convergence speed, or compute overhead from the double-loop structure, which would help assess trade-offs.
Supplementary Material: The supplementary material is extensive and well-written. It provides a thorough continuation of the main paper's background, offering detailed explanations of the model architecture, training procedures, and experimental setups. It effectively supports and complements the main text.
Relation To Broader Scientific Literature: - NPs and Meta-Learning: the paper extends NPs by addressing a critical limitation — their reliance on empirical risk minimization, which can lead to poor performance on outlier tasks. Unlike existing enhancements (e.g., bootstrapped or stabilized NPs), the paper introduces explicit risk-sensitive objectives.
- Risk-averse and robust optimization: the paper is the first to apply CVaR-based risk minimization in NPs, treating task-level adaptation risk as a distribution and focusing on controlling the worst-performing fraction.
- mirror prox and variance reduction: the paper innovatively adapts these optimization methods to the risk-averse meta-learning process.
Essential References Not Discussed: Citations are well written.
Other Strengths And Weaknesses: Strengths:
---
- Originality: While the core components (CVaR, Mirror Prox, variance reduction) are established individually, their integration into the Neural Process framework is novel and well-motivated.
- Significance: The proposed method addresses a real and underexplored challenge: robustness to high-risk task failure in few-shot and meta-learning settings. Given the increasing deployment of meta-learning models in uncertain environments (e.g., recommendation or robotics), this contribution is both timely and impactful.
- Clarity of technical sections: The paper does a solid job presenting technically dense ideas (e.g., minimax reformulation, Bregman setups) in a logically structured way. The supplementary material is especially helpful, providing detailed derivations and implementation notes.
- Empirical breadth: The experiments span generative modeling, black-box optimization, and decision-making, showcasing the generality of the approach. The consistent performance improvements across these settings strengthen the empirical case.
Weaknesses:
---
- Clarity for broader audiences: While technically sound, some sections (e.g., optimization formulation and variance reduction) are dense and could benefit from more intuitive explanations or diagrams in the main text.
- Ablation/Interpretability: There is limited ablation on the effect of CVaR level α, inner/outer loop iterations, or mirror prox updates. Understanding the sensitivity of the model to these choices would clarify its practical utility.
- Efficiency analysis missing: The method introduces nontrivial computational overhead (double-loop structure, per-task sampling), but no runtime or convergence speed comparisons are provided. This could be critical for real-world deployment.
- Limited theoretical guarantees: While the optimization method is principled, the paper lacks convergence guarantees in non-convex regimes — a common but notable limitation that could be acknowledged more explicitly.
Overall:
---
The paper presents a significant and novel contribution by adapting risk-averse learning to NPs, backed by thoughtful algorithm design and broad empirical validation.
Other Comments Or Suggestions: typos
---
section 3.1: fundom -> random
Questions For Authors: Already addressed in the weaknesses section:
- How sensitive is the performance to the CVaR level α?
- What is the computational overhead of the double-loop optimization compared to standard ERM-trained NPs?
Also,
- It’s not fully explained why improving tail performance doesn't degrade average-case accuracy?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your positive and valuable feedback. We have made efforts to address your concerns. If there are any further questions, please let us know, and we will reply promptly.
**Q1.** **Ablation experiments on CVaR α, inner/outer loop iteration lengths, and learning rate.**
> **Reply**: We appreciate the need for more thorough ablations. Due to space limits, we included a subset of these in Appendix E.3, focusing on α. Empirically, we have observed that α values in a moderate range (e.g., 0.4–0.6) often strike a good balance between focusing on worst-case tasks and not overly biasing the model toward rare outliers. We show an ablation (Figure 8) demonstrating that too small an α (e.g., 0.1) under-treats high-risk tasks, while an extreme α (e.g., >0.8) can degrade average performance. These observations align intuitively with the idea that CVaR reweights the “tail” portion of tasks.
>
> In the paper, $S$ denotes the number of outer loops (epochs), and $K_s$ represents the number of inner-loop iterations within each outer-loop $s$. In experiments, we set $K_s$ as a constant across all outer loops, i.e., $K_s = K$ for all $s$. $K$ is chosen to scale with the average number of samples per task. Empirically, we find that $K_s = 100-500$ often suffice for stable training in NP tasks; deeper loops help slightly but yield diminishing returns relative to the added cost. For outer loop epochs $S$, usually between 50 to 500 epochs. For simple tasks (e.g., low-dimensional regression, small datasets), $S$ is 50–200 epochs. For complex tasks (e.g., image completion), $S$ often requires 200–500+ epochs. We will expand them to also show the effect of varying the number of inner/outer loop iterations in a revised version.
>
> We used the Adam optimizer with an initial learning rate of $5·10^{-4}$ and decayed the learning rate using a cosine annealing scheme.
**Q2.** **How does the double-loop optimization compare to simpler baselines in computational overhead?**
> **Reply**: The additional overhead mainly comes from (i) maintaining snapshots of gradients and (ii) sampling from each task in each inner iteration. However, because we adopt a task-aware sampling scheme (i.e., one sample per task), the minibatch size remains comparable to that of standard NP training.
> In practice, we find the extra overhead to be modest—roughly 1.1–1.3× the per-epoch training time compared to standard NPs, depending on the number of tasks and the length of inner loops.
> We will add more quantitative details in the final version, including a time-per-epoch comparison and possible strategies (e.g., smaller inner loops) to reduce total training time.
**Q3.** **Why does improving tail performance not degrade average accuracy?**
> **Reply**: Elevated attention to difficult tasks prevents overfitting to “easy” samples and encourages more robust parameter updates. This can reduce overall variance in predictions and in practice, often benefits average-case accuracy. Our experiments (Tables 1, 2, 3) demonstrate that while we significantly reduce high-risk failure rates, the average performance (e.g., log-likelihood) either remains on par or improves slightly compared to standard ERM-based NPs. CVaR reweights tasks in proportion to their risk. Importantly, for moderately chosen α, we are not only training on the worst tasks but rather adjusting the objective so that the upper tail influences the gradient more strongly. The result is a more balanced training signal across all tasks.
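As a concrete (hypothetical) numerical illustration of how CVaR concentrates the objective on the tail of the task-loss distribution, here is a minimal empirical CVaR over a handful of made-up task losses (my own sketch, not the paper's implementation):

```python
def cvar(losses, alpha):
    """Empirical CVaR at level alpha: mean of the worst (1 - alpha) fraction of losses."""
    k = max(1, int(round((1 - alpha) * len(losses))))
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / len(worst)

task_losses = [0.1, 0.2, 0.3, 0.4, 2.0]        # one "hard" task dominates the tail
print(cvar(task_losses, alpha=0.8))            # 2.0 -- only the worst task contributes
print(round(cvar(task_losses, alpha=0.0), 3))  # 0.6 -- reduces to the plain average
```

At moderate α the objective still sees several tasks, which matches the rebuttal's point that CVaR reweights rather than exclusively trains on worst-case tasks.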
**Q4.** **The paper largely focuses on convex analysis, but deep NPs are inherently non-convex.**
> **Reply**: We agree that in non-convex regimes, strong theoretical convergence guarantees are elusive (an issue for many deep learning algorithms). While mirror-prox methods do enjoy strong convergence rates for convex-concave saddle-point problems, we currently rely on their practical effectiveness for non-convex NPs.
> We have followed prior work in robust optimization, which often leverages the same theoretical tools in non-convex settings with the understanding that local minima or stable solutions can often suffice in practice. In the final version, we will highlight this limitation more explicitly.
Lastly, thank you once again for your valuable comments. | null | null | null | null | null | null | null | null |
Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity | Accept (poster) | Summary: This paper introduces a novel Fixed-point Parallel Training (FPT) method to accelerate Spiking Neural Networks (SNNs) training.
The method is theoretically analyzed for convergence, and the authors show that existing parallel spiking neurons are special cases of this approach.
Experimental results demonstrate that FPT effectively simulates the dynamics of the original LIF neurons, significantly reducing computational time without sacrificing accuracy.
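As a rough illustration of the general fixed-point idea (my own toy sketch with made-up constants, not the paper's FPT formulation): freezing a guessed spike/reset pattern decouples the timesteps of the LIF recurrence, and iterating the guess converges to the same spike train as a sequential simulation.

```python
# Toy LIF: v[t] = decay * v[t-1] * (1 - s[t-1]) + input[t],  s[t] = (v[t] >= thresh).
decay, thresh = 0.5, 1.0
inputs = [0.6] * 5

def step_spikes(spikes):
    """Recompute the spike train assuming the reset pattern given by `spikes`."""
    v, out = 0.0, []
    for t, i in enumerate(inputs):
        reset = 1.0 - (spikes[t - 1] if t > 0 else 0.0)  # reset from the *guess*
        v = decay * v * reset + i
        out.append(1.0 if v >= thresh else 0.0)
    return out

spikes = [0.0] * len(inputs)      # initial guess: no spikes
for _ in range(10):               # fixed-point iterations
    new = step_spikes(spikes)
    if new == spikes:             # fixed point reached
        break
    spikes = new
print(spikes)  # [0.0, 0.0, 1.0, 0.0, 0.0], matching the sequential LIF simulation
```

With the reset pattern fixed, each `v[t]` is a linear recurrence that could be evaluated across all timesteps in parallel (e.g., via a prefix scan), which is where the parallel speedup of such schemes comes from.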
Claims And Evidence: Yes. The claim in this paper is clear.
Methods And Evaluation Criteria: Yes. The method is novel and theoretically complete.
Theoretical Claims: Yes. The proofs about fixed points are correct.
Experimental Designs Or Analyses: Yes. Experiments in this paper are extensive and sound.
Supplementary Material: The authors did not provide supplementary material.
Relation To Broader Scientific Literature: This paper is an extension of previous work on parallel training SNNs.
I see this paper as a milestone because this method can be used to pre-train a SNN parallelly.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. correct and robust proofs;
2. well-written and clear motivations;
3. extensive and convincing experiments.
Weaknesses:
No obvious weaknesses.
Other Comments Or Suggestions: 1. The usage of "\cite" and "\citet" in LaTeX should be checked carefully. There are several misuses of "\cite" in Sections 2.3 and 4.3.1.
2. The authors should consider the running time comparison (or operation counts) of the training process and inference process between your method and traditional surrogate methods.
Questions For Authors: 1. I wonder whether your method can be extended to pre-training SNNs in parallel.
There have been several works, such as SpikeBERT, SpikeGPT, and SpikeLM, which pre-train LIF-based spiking Transformers sequentially rather than in parallel.
Please discuss the applicability of your method to pre-training SNNs.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank you for your high evaluation of our work, seeing this paper as a milestone due to its potential to enable parallel pre-training of SNNs. We also appreciate your valuable and detailed feedback. Below, we will answer your questions.
**Q1: The usage of "\cite" and "\citet"**
**A1:** We have carefully re-checked all citations throughout the manuscript. However, since ICML does not allow manuscript modifications during the rebuttal period, we will correct any remaining formatting issues (e.g., incorrect use of `\cite` vs `\citet`) in the camera-ready version if the paper is accepted.
**Q2: Running time comparison of the training process and inference process**
**A2:** Our proposed algorithm supports parallel training, including parallel forward and backward passes. We achieve speedups in both passes. To demonstrate this, we compare the training and inference complexity of BPTT and FPT at different timesteps ($T$). The experiments are conducted on the MNIST dataset using a 3-layer MLP (784×256×128×10), batch size = 256, learning rate = 0.001, and 80 training epochs.
|Method|T|Training Time (s)|Inference Time (s)|Accuracy (%)|
|-|-|-|-|-|
|BPTT|8|0.0195|0.0042|$97.75\pm0.16$|
||64|0.0835|0.0257|$97.67\pm0.17$|
||512|1.701|0.2092|$97.84\pm0.12$|
|FPT|8|0.0096|0.0021|$97.73\pm0.23$|
||64|0.0109|0.0021|$97.71\pm0.11$|
||512|0.0803|0.0021|$97.70\pm0.22$|
Here, "time" refers to the average time required to train or infer a single batch on a single A100 GPU. As shown in the table, both the training and inference time of BPTT increase with the number of timesteps $T$. In contrast, the training time of FPT increases only slightly, and its inference time remains almost the same regardless of $T$. Notably, at $T = 512$, FPT is 21 times faster to train compared to BPTT.
**Q3: Extended to pre-training SNNs parallelly**
**A3:** A key advantage of our algorithm is its flexibility - it does not impose restrictions on specific network architectures and can be applied to a wide range of SNN models. By replacing the time-consuming LIF neuron sequential computation with our proposed FPT-based parallel iterations, both the forward and backward passes during training can be significantly accelerated.
Importantly, since FPT preserves the original neuron dynamics, the pre-trained model can still be deployed and inferred using standard SNN sequential processing. This makes FPT not only suitable for accelerating training, but also highly compatible with existing SNN hardware deployments. Therefore, FPT provides a practical and general solution for efficient parallel pre-training of SNNs without sacrificing biological fidelity or inference compatibility.
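For readers unfamiliar with the sequential bottleneck being replaced, a minimal LIF simulation (illustrative decay and threshold values; not the paper's exact parameterization) makes the O(T) dependency explicit:

```python
import numpy as np

def lif_sequential(inputs, lam=0.5, v_th=1.0):
    """Sequential LIF simulation: each timestep depends on the previous
    membrane state, which is the O(T) chain that parallel training removes."""
    v = 0.0
    spikes = []
    for x in inputs:              # sequential loop over timesteps
        v = lam * v + x           # leak + integrate
        s = float(v >= v_th)      # fire if the threshold is reached
        v = v * (1.0 - s)         # hard reset after a spike
        spikes.append(s)
    return np.array(spikes)

out = lif_sequential(np.array([0.6, 0.6, 0.6, 0.6]))
```

Because FPT preserves these dynamics, a model trained in parallel can still be deployed with exactly this kind of sequential inference.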
We again sincerely thank you for the thoughtful and encouraging comments. We hope that our response will clarify your concerns and further demonstrate the practicality and generality of FPT.
---
Rebuttal Comment 1.1:
Comment: I confirm my score. Good luck. | Summary: This work introduces Fixed-point Parallel Training (FPT), a novel method that reduces SNN training time complexity from O(T) to O(K) (where K is a small constant, typically K = 3) by enabling efficient parallel processing across all timesteps without modifying the network architecture. A theoretical convergence analysis proves the stability of FPT and demonstrates that existing parallel spiking neuron models are special cases of this framework. Importantly, FPT preserves LIF neuron dynamics, including membrane potential updates and reset mechanisms, ensuring both biological interpretability and computational efficiency. By decoupling sequential dependencies, FPT significantly accelerates training while maintaining or even improving accuracy, making it a scalable and efficient solution for large-scale, long-duration SNN tasks on modern hardware. This advancement enhances the feasibility of deploying SNNs in real-world applications, particularly in neuromorphic computing and time-sensitive spatiotemporal processing tasks.
Claims And Evidence: The claims are supported by clear evidence.
Methods And Evaluation Criteria: The "LIF Dynamics Simulation" part only includes a single LIF neuron. However, the propagation of network dynamics might be quite different, since the effects could accumulate across the network. How are the network dynamics affected across long T?
How does the method perform on ImageNet-1K?
Theoretical Claims: The proofs are correct.
Experimental Designs Or Analyses: See "Methods And Evaluation Criteria"
Supplementary Material: Part A
Relation To Broader Scientific Literature: Yes
Essential References Not Discussed: No
Other Strengths And Weaknesses: See "Methods And Evaluation Criteria"
Other Comments Or Suggestions: No
Questions For Authors: See "Methods And Evaluation Criteria"
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your encouraging feedback and for recognizing that FPT significantly accelerates SNN training, making it a scalable and efficient solution for large-scale, long-duration tasks. We also appreciate your recognition of its potential in practical applications, especially neuromorphic computing and time-sensitive spatiotemporal processing. Below, we provide detailed responses to your questions and concerns.
**Q1: How the network dynamics are affected across long T**
**A1:** We replaced the LIF neurons in a pre-trained 3-layer LIF-based MLP (784×256×128×10) on the MNIST dataset with parallel LIF neurons based on FPT. The cosine similarity (%) between the original and FPT-replaced outputs for $T$=8, 64, and 512 is shown in the table below:
||T = 8|T = 64|T = 512|
|-|-|-|-|
|$\alpha=5$|99.89|99.55|99.53|
|$\alpha=7$|99.90|99.49|99.47|
As $T$ increases, the outputs of the FPT-based parallel LIF and the original LIF become less consistent due to error accumulation, resulting in a slight decrease in the similarity of the final network output. However, even for $T$=512, the similarity remains around 99.5%, indicating that FPT maintains a high degree of consistency in the network dynamics. Moreover, this minor difference can be addressed by light fine-tuning.
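The consistency metric reported above can be computed along these lines (a generic sketch; the tensor shapes and exact aggregation over the network outputs are assumptions):

```python
import numpy as np

def cosine_similarity_pct(a, b):
    """Cosine similarity between two flattened output tensors, as a percentage."""
    a, b = np.ravel(a), np.ravel(b)
    return 100.0 * np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical original vs. FPT-replaced network outputs
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.0])
sim = cosine_similarity_pct(x, y)
```

Identical outputs give 100%, so values near 99.5% indicate that the parallel approximation tracks the original dynamics closely.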
**Q2: Experiments on ImageNet1K**
**A2:** The proposed FPT mainly focuses on improving the speed for long $T$. Thus, in the submission, we reported results on dynamic datasets such as DVS-CIFAR10 ($T$=10) and DVS-Gesture ($T$=20), static datasets including ImageNet-100 ($T$=4), and graph datasets like Amazon Photos ($T$=8, 32, 128), as well as sequential datasets such as Sequential CIFAR10 ($T$=32) and Sequential CIFAR100 ($T$=32). In contrast, static datasets like ImageNet usually use smaller timesteps (e.g., $T$=5 or 6).
In addition, due to the high computational cost of training SNNs on ImageNet-1K, many previous studies (e.g., SSNN and LocalZO) have adopted alternative benchmarks such as Tiny-ImageNet or ImageNet-100. For this reason, we also chose ImageNet-100 as the representative benchmark in the main results. We are currently actively conducting experiments on ImageNet-1K, using the same experimental settings as MS-ResNet-18 [1]. Due to time constraints, we report the preliminary results we have obtained so far below.
||BPTT (82 epochs, our running, latest)|FPT (82 epochs, our running)|FPT (97 epochs, latest)|
|-|-|-|-|
|Accuracy (%)|52.27|53.57|57.55|
|Training loss|2.21|2.18|1.98|
|Per-batch time (s)|8.48|5.76|5.76|
The training loss refers to the cross-entropy loss of the last batch in the given epoch. As seen, with the same hyperparameters and number of epochs, FPT achieves higher accuracy and lower loss. Additionally, even with $T$=6, FPT achieves approximately 1.5× faster training speed compared to BPTT on the same 4 3090 GPUs. It is important to note that, due to time and computational limitations, we have not yet fine-tuned the hyperparameters for FPT. Further adjustments may lead to even better performance.
[1] Advancing Spiking Neural Networks Toward Deep Residual Learning. TNNLS 2024
We sincerely hope that our clarifications have addressed your concerns and will help improve your opinion of our work. | Summary: The paper proposes a new training method for SNNs called Fixed-point Parallel Training (FPT), which aims to improve efficiency by reducing time complexity from O(T) to O(K), where K is a small constant. The method leverages a fixed-point iteration framework to enable parallel computation across timesteps rather than processing them sequentially. The authors argue that this approach preserves key neural dynamics, including the reset mechanism of Leaky LIF neurons, while accelerating training significantly. Theoretical analysis is provided to prove the convergence of FPT, and the authors demonstrate that some existing parallel spiking neuron models can be viewed as special cases of this method. Experiments show that FPT achieves competitive accuracy while significantly reducing computational costs compared to traditional BPTT and other existing parallel spiking neuron training methods.
## update after rebuttal
Thanks to the authors for the detailed and thoughtful rebuttal. I appreciate the additional clarifications on memory usage, the effect of surrogate gradients on convergence, and the broader applicability of FPT. It’s clear a lot of effort went into addressing the points I raised.
That said, my overall assessment remains the same. While the new details help, the broader concerns — like the limited experimental scope, lack of deeper analysis on the practical trade-offs, and the fairly narrow range of tested applications — are still there. I think the paper presents a promising idea and the results are solid within the datasets tested, but it doesn't quite reach the level of novelty and thorough validation I’d expect for acceptance. So I’m keeping my original score of Weak Reject.
Claims And Evidence: The main claim is that FPT allows parallel training of SNNs without modifying the network architecture while maintaining biological interpretability. The authors provide both theoretical and empirical support for this claim. The convergence proof of the fixed-point iteration is a positive contribution, though it relies on certain assumptions about the Lipschitz continuity of the surrogate function. The empirical results show that FPT achieves a significant speedup over BPTT while maintaining comparable accuracy across multiple datasets. However, the experiments lack detailed comparisons with alternative approaches such as event-based SNN training techniques or alternative parallelization strategies like recurrent SNNs with gating mechanisms. The claim that FPT generalizes well to different datasets is only partially supported since the paper primarily focuses on neuromorphic datasets like DVS-CIFAR10 and DVS-Gesture but does not test on more complex real-world applications, such as continuous control or large-scale spiking datasets.
One aspect that is not thoroughly examined is the potential trade-off between computational speed and memory usage. Since FPT processes all timesteps in parallel, it likely requires more memory than sequential training methods. The authors acknowledge this in the discussion but do not provide a quantitative breakdown of memory overhead. The claim that FPT maintains all critical neural dynamics is also somewhat oversimplified because the backward process does not exactly correspond to traditional BPTT. The experimental results support the main claims to some extent, but the paper would benefit from deeper analysis of efficiency trade-offs and broader comparisons to alternative SNN training methods.
Methods And Evaluation Criteria: The authors evaluate FPT on standard neuromorphic datasets such as DVS-CIFAR10, DVS-Gesture, and ImageNet-100. These datasets are commonly used for benchmarking SNNs, so the choice is reasonable. The experiments compare FPT to existing training methods such as timestep shrinkage, online training, and stabilized spiking flow, providing a fair assessment of performance gains. However, the evaluation criteria focus almost entirely on accuracy and training speed, without considering other important factors like memory usage, energy efficiency, or sensitivity to hyperparameters. Since one of the key motivations of SNNs is energy efficiency, a discussion of power consumption during training would be valuable.
The authors also conduct an ablation study to examine the impact of the number of iterations (K) and the role of the reset mechanism. This is a strong aspect of the evaluation, as it provides insight into the effectiveness of the method. However, a more detailed breakdown of how K affects convergence speed and accuracy would make the evaluation even stronger. The experimental design is generally sound but could be improved by including a broader set of tasks, particularly those requiring long-term dependencies, to further validate the generalization ability of FPT.
Theoretical Claims: The paper provides a theoretical proof that FPT converges to a fixed point under certain conditions. This is an important contribution, as many SNN training methods lack formal convergence guarantees. The proof is based on contraction mapping and Lipschitz continuity arguments, which are reasonable assumptions for neural networks with smooth surrogate gradients. However, the proof does not establish how the convergence rate of FPT compares to BPTT or whether it guarantees optimal weight updates in all cases. The surrogate gradient approach used in the backward pass introduces an additional layer of approximation, and the impact of this approximation on convergence is not fully analyzed. The theoretical claims are generally well-supported but would benefit from additional discussion on potential failure modes, such as scenarios where the fixed-point iteration might converge too slowly or to suboptimal solutions.
Experimental Designs Or Analyses: The experimental setup is well-structured, with comparisons across multiple datasets and an ablation study to analyze key design choices. The main advantage demonstrated is the speedup of training, particularly for long-duration simulations where traditional BPTT is inefficient. However, there are some weaknesses in the analysis. The paper does not include a detailed breakdown of computational cost beyond training time, such as memory usage and GPU utilization. While FPT reduces the number of sequential operations, it likely increases parallel memory requirements, and this trade-off is not analyzed in depth.
Another issue is the limited scope of dataset selection. Most of the experiments focus on neuromorphic benchmarks, which are useful for validating the method but do not fully demonstrate its applicability to broader machine learning tasks. It would be beneficial to see results on tasks like speech recognition, reinforcement learning, or other domains where SNNs are increasingly being applied. Finally, the statistical significance of the results is not clearly reported. While standard deviations are included, there is no discussion of whether the differences in accuracy are statistically significant across multiple runs.
Supplementary Material: The supplementary material includes additional experimental details and theoretical derivations, which help clarify the paper’s contributions. The appendix provides training hyperparameters and additional results, but it lacks an in-depth discussion of implementation details. There is no code provided for reproducibility, which makes it difficult for others to verify the claims. The theoretical proofs are useful, but additional numerical experiments demonstrating different convergence behaviors would strengthen the argument.
Relation To Broader Scientific Literature: The paper is well-grounded in the existing literature on SNN training methods and parallel computing in neural networks. It references prior work on timestep reduction, online training, and surrogate gradient methods, positioning FPT as an improvement over these approaches. However, the discussion is somewhat limited to SNN-specific methods and does not draw connections to broader machine learning literature. There are similarities between FPT and parallelization techniques used in recurrent neural networks (RNNs) and deep equilibrium models, but these connections are not explored in depth. It would be useful to compare FPT to other parallel training techniques used in non-spiking networks to highlight its broader relevance.
Essential References Not Discussed: The paper discusses prior work on parallel SNN training and timestep reduction but does not cite some relevant research on alternative acceleration techniques, such as event-driven training methods or hybrid ANN-SNN approaches. There is also limited discussion of how FPT compares to neuromorphic hardware implementations, which are a key motivation for efficient SNN training. Some recent work on efficient surrogate gradient methods and biologically inspired training rules could also be relevant for positioning FPT within the broader field.
Other Strengths And Weaknesses: The main strength of the paper is its focus on improving the efficiency of SNN training without modifying the underlying network architecture. The use of fixed-point iteration for parallel training is a novel and well-motivated approach. The experimental results demonstrate clear speed improvements over BPTT while maintaining accuracy. However, the paper has several weaknesses. The evaluation lacks a detailed analysis of computational trade-offs, particularly regarding memory usage and energy efficiency. The theoretical analysis, while useful, does not fully address the impact of surrogate gradients on convergence. The experiments are mostly limited to neuromorphic datasets, which may not fully demonstrate the generalization ability of FPT.
Other Comments Or Suggestions: The authors should include a discussion on memory efficiency and computational cost beyond training time. They should also test FPT on a broader range of tasks to demonstrate its applicability. Providing open-source code would improve reproducibility.
Questions For Authors: 1. How does FPT compare in memory consumption to standard BPTT?
2. How does the surrogate gradient approximation affect convergence guarantees?
3. Can FPT be applied to reinforcement learning tasks or other non-spiking domains?
4. What are the potential failure cases where FPT might not converge effectively?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank you for recognizing that FPT enhances the efficiency of SNN training without modifying the underlying network architecture and for acknowledging FPT as a novel and well-motivated approach. Below, we provide detailed responses to your questions.
**Q1: How does FPT compare in memory consumption to standard BPTT?**
**A1:** To compare the memory consumption of BPTT and FPT, we conducted experiments on the MNIST dataset using a 3-layer MLP (784×256×128×10), a batch size of 256, a learning rate of 0.001, and 80 epochs. The table below shows the comparison at different $T$:
|Method|T|Training Time (s)|Inference Time (s)|Memory (MB)|Accuracy (%)|
|-|-|-|-|-|-|
|BPTT|8|0.0195|0.0042|600|$97.75\pm0.16$|
||64|0.0835|0.0257|956|$97.67\pm0.17$|
||512|1.701|0.2092|3238|$97.84\pm0.12$|
|FPT|8|0.0096|0.0021|622|$97.73\pm0.23$|
||64|0.0109|0.0021|1162|$97.71\pm0.11$|
||512|0.0803|0.0021|4264|$97.70\pm0.22$|
Here, "Time" refers to the average running time for one batch during training or inference on a single A100 GPU. At $T=512$, FPT is 21x faster than BPTT during training.
FPT only slightly increases the memory usage for the LIF during training without affecting other components of the network.
Thus, the overall memory consumption is not significantly higher than BPTT. For instance, in larger networks such as MS-ResNet-18 trained on ImageNet1K, BPTT consumes 22.54 GB on 4 3090 GPUs, while FPT consumes 23.52 GB, resulting in only a 4% increase in memory usage.
**Q2: How does the surrogate gradient approximation affect convergence guarantees?**
**A2:** The surrogate gradient approximation does not affect the convergence guarantees because these guarantees are based on the forward pass of the network, while the surrogate gradient approximation is applied on the backward pass. In addition, the surrogate gradient approximation is necessary for backpropagation in SNNs due to the non-differentiability of the spike activation function. The effectiveness of surrogate gradient approximation has been well verified through experiments on various datasets.
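The separation described in this reply (exact spikes in the forward pass, a smooth surrogate derivative only in the backward pass) can be sketched with a common sigmoid-shaped surrogate; the paper's exact choice of surrogate may differ:

```python
import numpy as np

def spike(v, v_th=1.0):
    """Forward pass: non-differentiable Heaviside spike function."""
    return (v >= v_th).astype(float)

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Backward-pass stand-in: derivative of a sigmoid centered at the
    threshold (alpha controls sharpness); used only during backprop."""
    s = 1.0 / (1.0 + np.exp(-alpha * (v - v_th)))
    return alpha * s * (1.0 - s)

g = surrogate_grad(np.array([1.0]))  # gradient at the threshold
```

Since the surrogate touches only the backward pass, the forward-pass fixed-point convergence argument is unaffected.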
**Q3: Can FPT be applied to reinforcement learning tasks or other non-spiking domains?**
**A3:** FPT can indeed be applied to reinforcement learning tasks as long as the task uses LIF-based SNNs for temporal processing. Our approach is particularly suitable for addressing the high latency problem of sequence processing using LIF neurons. However, for other non-spiking domains that do not rely on SNNs, this is beyond the scope of our current work.
**Q4: What are the potential failure cases where FPT might not converge effectively?**
**A4:** In the submission, we have discussed this issue in Appendix F "Discussion and Limitation". FPT works well for LIF-based SNNs with temporal decay, but may not be effective for simple Integrate-and-Fire models that lack such mechanisms. However, mainstream neuron models typically incorporate a decay factor or gating mechanism, which ensures that the influence of past inputs diminishes over time.
**Other Questions**
- *The claim that FPT maintains all critical neural dynamics is also somewhat oversimplified because the backward process does not exactly correspond to traditional BPTT.*
Neural dynamics, including leakage, integration, firing, and reset, are part of the neuron's forward process. Neither the backward process of BPTT nor that of FPT is part of the neural dynamics; both are used solely to optimize network weights.
- *Since one of the key motivations of SNNs is energy efficiency, a discussion of power consumption during training would be valuable.*
The energy efficiency of SNNs typically refers to inference on neuromorphic hardware. Both BPTT and FPT rely on floating-point operations on GPUs during training, which does not reflect deployment energy costs. FPT does not affect sequential inference behavior of SNNs. As shown in Table 2 of the submission and in additional experiments presented in **Review hARo (A1)**, its firing rate is comparable to or even lower than BPTT, ensuring that SNNs trained with FPT remain energy efficient during deployment.
- *The paper primarily focuses on neuromorphic datasets like DVS-CIFAR10 and DVS-Gesture*
In the submission, we reported results on dynamic datasets such as DVS-CIFAR10 and DVS-Gesture, static datasets including ImageNet-100, and graph datasets like Amazon Photos, as well as sequential datasets such as Sequential CIFAR10 and Sequential CIFAR100.
- *Additional numerical experiments demonstrating different convergence behaviors would strengthen the argument.*
In the submission, Section 6.1 "LIF Dynamics Simulation" already discusses the convergence behavior.
- *Providing open-source code would improve reproducibility.*
The code will be publicly available on GitHub after the publication of this work.
We sincerely hope that our clarifications above have addressed your concerns and that our responses contribute positively to your understanding of our work. | Summary: This paper proposes Fixed-point Parallel Training for efficient training of SNN, which does not change the network architectures. This training mode does not affect the dynamics of LIF neurons and achieves better performance on data with time series information such as DVS.
Claims And Evidence: The three contributions of this paper are:
- It "proposes a novel Fixed-point Parallel Training (FPT) method that reduces the training time complexity of SNNs";
- it "proves the convergence of FPT and demonstrates that existing parallel spiking neuron models can be derived as special cases of our framework";
- it "retains the dynamic properties of original LIF neurons, achieves better performance, and significantly reduces computational time".
I agree with the first two contributions, but I am skeptical about the last one.
- This paper does not show in detail the computation, time, and space complexity of using FPT to train SNN. Please refer to T-RevSNN [1] for detailed experimental data on various complexities during training.
- FPT is mainly experimented on time series data, lacking experiments on static data. Please add experiments on ImageNet1k to prove that FPT is still reliable on static data tasks in the rebuttal.
[1] High-Performance Temporal Reversible Spiking Neural Networks with O(L) Training Memory and O(1) Inference Cost. ICML 2024.
Methods And Evaluation Criteria: This paper lacks experiments on large static datasets, such as ImageNet1k. Also, the DVS data selected in this article is also relatively small. If possible, please consider supplementing the experiments on HAR-DVS [1] in the rebuttal.
[1] HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors. AAAI 2024.
Theoretical Claims: The proof in Appendix A is correct in my view.
Experimental Designs Or Analyses: Please see `Methods And Evaluation Criteria`.
Supplementary Material: I mainly focus on the proofs in Appendix A and the experimental setup in Appendix C. Notice that this paper used TET as the loss function. I wonder if the methods compared in the main text all use TET as the loss function. If not, there may be an unfair experimental comparison.
Relation To Broader Scientific Literature: This paper focuses on addressing parallel training methods for SNNs and does not seem to make an obvious contribution to the broader scientific literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: There is no description of `Time` in Table 2 and Figure 3, which may cause confusion.
Questions For Authors: Please see `Claims And Evidence`, `Methods And Evaluation Criteria`, and `Supplementary Material`.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for recognizing the novelty and contributions of our proposed FPT framework, including its ability to reduce the training complexity of SNNs and the fact that existing parallel spiking neuron models can be viewed as special cases of FPT. We also appreciate the constructive and insightful feedback. Below, we answer the reviewers' questions in detail.
**Q1: Various complexities of FPT during training refer to T-RevSNN**
**A1**: Thank you for your valuable suggestion to further demonstrate the various complexities of FPT. The table below compares the theoretical training and inference complexity of different algorithms:
|Methods|Training Memory|Training Time|Inference Energy|Applicable Scope|
|-|-|-|-|-|
|OTTT|$O(L)$| $O(LT)$|$O(T)$|Limited|
|SLTT-k|$O(L)$|$O(Lk)$|$O(T)$|Limited|
|T-RevSNN turn-off|$O(L)$|$O(L)$|$O(1)$|Limited|
|T-RevSNN turn-on|$O(L)$|$O(T)$|$O(1)$|Limited|
|BPTT|$O(LT)$|$O(LT)$|$O(T)$|Unlimited|
|FPT|$O(LT)+\lambda O(LKT)$|$O(LK)$|$O(T)$|Unlimited|
Here, $L$ is the number of network layers, $T$ is the timestep, and $K$ is the number of iterations in FPT, which is typically 3 and does not increase with $T$. Thus, for long $T$, $K$ can be approximated to be negligible. The space complexity $O(LKT)$ and time complexity $O(LK)$ can be approximated as $O(LT)$ and $O(L)$, respectively. The coefficient $\lambda$ represents the proportion of memory attributed to LIF components, which is the only part FPT increases—other parts of the network remain unaffected.
It is worth noting that methods such as OTTT, SLTT, and T-RevSNN truncate gradients or discard most of the temporal connections, which can limit their applicability to tasks requiring fine-grained temporal dynamics. In contrast, FPT accelerates SNN training without modifying the network and retaining the original neuron dynamics, so it is applicable to a wider range of SNN models.
Due to time constraints and the limitations of OTTT, SLTT, and T-RevSNN in capturing temporal features, our experiments mainly focus on comparing the complexity of BPTT and FPT at different timesteps $T$. The experiments are conducted on the MNIST dataset using a 3-layer MLP (784×256×128×10), a batch size of 256, a learning rate of 0.001, and 80 epochs.
|Method|T|Training Time (s)|Inference Time (s)|Firing rate (%, layer 1)|Firing rate (%, layer 2)|Memory (MB)|Accuracy (%)|
|-|-|-|-|-|-|-|-|
|BPTT|8|0.0195|0.0042|44.47|47.03|600|$97.75\pm0.16$|
||64|0.0835|0.0257|45.1|46.43|956|$97.67\pm0.17$|
||512|1.701|0.2092|46.55|47.76|3238|$97.84\pm0.12$|
|FPT|8|0.0096|0.0021|40.83|44.29|622|$97.73\pm0.23$|
||64|0.0109|0.0021|38.24|40.36|1162|$97.71\pm0.11$|
||512|0.0803|0.0021|38.14|39.94|4264|$97.70\pm0.22$|
Here, "Time" refers to the average running time for one batch during training or inference on a single A100 GPU. As shown in the table, both training and inference time for BPTT increase with $T$. In contrast, for FPT, the training time increases slightly, while the inference time remains almost constant. At $T=512$, FPT is 21x faster than BPTT during training. FPT only slightly increases the memory usage for the LIF during training without affecting other components of the network. In larger networks, such as MS-ResNet-18 trained on ImageNet1K, BPTT occupies 22.54 GB on 4 3090 GPUs, while FPT occupies 23.52 GB, resulting in only about a 4% increase in memory usage.
**Q2: Experiments on ImageNet1K and If possible, HAR-DVS**
**A2:** For a detailed discussion of the ImageNet1K experiments, we respectfully refer you to our response to **Reviewer gXhg (A2)**, due to space constraints here.
We thank the reviewer for suggesting the HAR-DVS dataset, a promising new benchmark for DVS-based action recognition. We will cite it in the introduction: `For instance, neuromorphic benchmark datasets such as HAR-DVS, DVS-CIFAR10 and DVS-Gesture typically need 10 or more timesteps to reach satisfactory accuracy.` However, due to time and computational constraints, and because this dataset was recently released, the main algorithms we compare have not yet reported results on it, making a direct comparison difficult in the short term.
**Q3: TET Loss**
**A3:** The baseline accuracy and metrics in our submission are from the best results in the respective papers. As TET loss is a mainstream loss function for SNNs, most of these baselines, such as T-RevSNN and LocalZO, utilize TET loss during training.
**Q4: "Time" in Table 2 and Figure 3**
**A4:** We apologize for any confusion. The "time" here refers to the average time required to train one batch on a single 3090 GPU. All baseline models were implemented with the same network architecture and hyperparameter configuration as ours, differing only in training method. FPT requires significantly lower training time than these baselines.
We sincerely hope that our clarifications have addressed your concerns and helped strengthen your confidence in our work. Thank you again for your thoughtful review.
---
Rebuttal Comment 1.1:
Comment: The rebuttal addresses my issues and I will raise the score to 4. | null | null | null | null | null | null |
ReFrame: Layer Caching for Accelerated Inference in Real-Time Rendering | Accept (poster) | Summary: This paper incorporates traditional caching methods previously used in U-Net based diffusion models into modern real-time rendering applications. The authors propose a novel caching policy that leverages motion vectors in graphics rendering pipeline to adaptively perform cache updates when the difference between frames has exceeded a preset threshold. The experiment results demonstrate that the proposed layer caching technique can accelerate several real-time rendering workloads with a speedup of up to 1.85x.
## update after rebuttal
Thank you for the rebuttal and the additional material that demonstrates the degree of motion across different evaluation datasets. However, I think the limitation to resource constrained devices and low-motion scenes considerably impacts the novelty of this work, as such environments are less sensitive to latency issues compared to dynamic scenes. In addition, one main contribution of the work is the adaptive policy, which distinguishes the work from prior uniform caching policies. However, I find the evaluation datasets not comprehensive and representative enough to demonstrate the effectiveness of the adaptive policy. There were only two videos provided during rebuttal, and the duration (number of frames) of the SunTemple scene is too short, which fails to demonstrate the superiority of an adaptive policy over a uniform policy. Furthermore, from the analysis of the optical flow measurements, the evaluation should incorporate more datasets exhibiting motion characteristics and temporal duration at least comparable to the AsianVillage scene. To this end, I would keep my original score.
Claims And Evidence: Most claims made in the paper are supported by clear and convincing evidence. However, the authors claim in Section 2.3 that “Real-time rendering networks commonly take advantage of U-Net (Ronneberger et al., 2015) and U-Net++ (Zhou et al., 2018) architectures …”. To my knowledge, there are also recent works that propose transformer-based networks in real-time rendering applications, which the paper does not mention. It is essential to either mention the shortcomings of the transformer-based methods and provide relevant references or demonstrate that U-Net is indeed the mainstream SOTA method with more supportive materials.
Methods And Evaluation Criteria: The evaluation criteria mostly follow the evaluations from prior work. However, there is limited discussion and/or demonstration of the dynamics of the scenes selected for evaluation, given that the effectiveness of the work largely depends on the degree of camera/object motion present in the scene.
Theoretical Claims: There’s no proof for theoretical claims.
Experimental Designs Or Analyses: The experimental designs and analyses are sound to me.
Supplementary Material: I reviewed all parts of the supplementary materials.
* Table 6 does not describe the amount of motion present in each test scene, which is an important performance factor of the proposed technique.
* I am surprised there’s no video supplementary provided for this real-time rendering paper. The authors have to show some results for the evaluation of quality degradation in the video.
Relation To Broader Scientific Literature: The proposed adaptive layer caching technique for real-time rendering is relevant to layer caching in U-Net based diffusion models.
Essential References Not Discussed: References are sufficient.
Other Strengths And Weaknesses: Strengths:
* The technique is training-free and does not require prior knowledge of the workload.
* The proposed adaptive caching policy is novel and leverages existing information in the rendering pipeline (motion vector) and requires no additional storage overhead.
Weaknesses:
* The caching technique’s effectiveness largely depends on the dynamic/motion of the scene.
* The work is limited to U-Net-like network architectures. There is also no mention of other types of networks (e.g. transformer-based models) in real-time rendering applications.
Other Comments Or Suggestions: * The metrics in the evaluation tables are not intuitive to read. For example, Table 1 states that the results are relative to the baseline, but does not clearly state how the presented FLIP, SSIM, PSNR, LPIPS, MSE values are calculated (do the baseline and experiment have a 41.41 PSNR and a 0.0169 FLIP difference?).
* (Minor) The topic of this work is better suited for computer graphics conferences, e.g., SIGGRAPH, EGSR.
Questions For Authors: * How does the technique perform in scenes with rapid camera movement and/or complex animations, which are common in many real-time latency-sensitive rendering applications such as gaming? In a highly dynamic scene where both the camera and the objects in the scene experience a large degree of motion, it seems like the frequent cache updates may instead cause lag, as Figure 7 demonstrates that many frames with cache updates have longer inference time.
* In Section 3.1, it is mentioned that the caching scheme can be applied to networks beyond U-Net and U-Net++ architectures. Can you potentially adapt it to work with transformer-based models as well?
* In Section 4.2.1, it is claimed that the caching scheme is most effective on frame extrapolation tasks where the U-Net dominates inference time. Why is it that in Table 1, for Delta_L, Supersample instead achieved the highest speedup among the three workloads?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the detailed and thoughtful review. We appreciate the advice for Table 1 and will update accordingly. The baseline is used as the ground truth reference and the metrics are computed compared to this reference (i.e. MSE = $\frac{1}{N_{pixels}} \sum_{i=0}^{N_{pixels}-1} (pixel^{baseline}_i - pixel^{test}_i)^2$)
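As a quick illustration of this reference-based computation (a minimal sketch; array names and shapes are hypothetical, not the authors' code), the MSE against the baseline, and the PSNR derived from it, can be computed as:

```python
import numpy as np

def mse(baseline: np.ndarray, test: np.ndarray) -> float:
    """Mean squared error over all pixels, with the baseline as reference."""
    diff = baseline.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(baseline: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, derived from the MSE."""
    m = mse(baseline, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Identical frames give MSE 0; a uniform pixel offset of 2 gives MSE 4.
a = np.zeros((4, 4), dtype=np.uint8)
b = a + 2
print(mse(a, b))  # 4.0
```

By this convention, the baseline frame scores MSE = 0 (and PSNR = infinity) against itself, which matches the rebuttal's description.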
To address reviewer concerns:
*Cache Updates*
With an adaptive policy, the cache would not be active during periods of rapid camera movement or complex animations that take up large portions of the screen space. However, this type of high activity is not typically sustained for a long period of time. In a video game, for example, there are likely periods of small motion while players scan the scene or hide, interleaved with periods of large motion when a fight occurs. Our caching technique is most effective during small motions, and we verify on gameplay recordings from GamingVideoSET [1] and CGVDS [2] that this behaviour exists for many real games.
Furthermore, we believe the "lag" from refreshing the cache is very small compared to the full inference time and mostly falls within the range of noise fluctuations of the full inference time. The lag is also primarily caused by computing deltas rather than refreshing the cache. If this lag is a concern, switching to the N-5 policy nearly eliminates it. Storing the cache only adds around 0.02-0.05ms in latency.
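The adaptive policy described here can be sketched as follows (illustrative only; the threshold value, class names, and the decision to accumulate mean motion-vector magnitude per frame are assumptions for this sketch, not the paper's implementation): reuse the cached features while accumulated motion stays below a sensitivity threshold, and trigger a full inference to refresh the cache once it is exceeded.

```python
import numpy as np

class AdaptiveCache:
    """Reuse cached layer features until accumulated motion exceeds a threshold."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.accumulated = 0.0
        self.features = None

    def step(self, motion_vectors: np.ndarray) -> str:
        """Per-frame decision; motion_vectors has shape (H, W, 2)."""
        self.accumulated += float(np.mean(np.linalg.norm(motion_vectors, axis=-1)))
        if self.features is None or self.accumulated > self.threshold:
            self.features = "refreshed"  # stand-in for a full inference pass
            self.accumulated = 0.0
            return "full"
        return "cached"

cache = AdaptiveCache(threshold=1.0)
still = np.zeros((4, 4, 2))        # no motion
moving = np.full((4, 4, 2), 1.0)   # large uniform motion
print([cache.step(still), cache.step(still), cache.step(moving)])
# ['full', 'cached', 'full']
```

Under this scheme, sustained low-motion stretches (players hiding or scanning) keep hitting the cache, while bursts of large motion immediately force a refresh.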
*Limitation to U-Net*
Our proposed method indeed is designed for U-Net-like networks. We appreciate the advice to bring more attention to our current limitations and will update our paper accordingly.
Although there are many transformer-based models used in real-time rendering applications (notably in NVIDIA DLSS 4.0), we believe U-Net-like convolutional networks are still heavily employed and more feasible to execute on lower-end devices. For example, super resolution in the Meta Quest VR headsets still relies on traditional algorithmic methods. Our caching technique is particularly useful in these lower-end devices where the proposed trade-off of slight quality loss for performance gains is valuable.
We have not evaluated our technique on any transformer-based model. However, our method *can* be applied in a more general setting where there is a concatenation of extracted features, as demonstrated in the supersampling workload. If a network uses transformer-based feature extractors, then concatenate the features, our caching technique should still apply.
*Highest Speedup*
We apologize for the misleading text, which we will fix. In general, we expect the caching scheme to be most effective when the skipped computations contribute to a large part of the overall network inference. For ExtraNet, the U-Net dominates inference time, resulting in good speedup. Supersample, although not U-Net based, uses a concatenation that combines three major components of the network, two of which we cache. Therefore, Supersample actually skips more of the overall network, resulting in its high speedup.
*Scene Motion*
We measure "motion" to the best of our understanding as the average optical flow magnitude over our test scene frame sequences. As a comparison point, we also include this metric measured on some GamingVideoSET [1] and CGVDS[2] recordings.
| **Dataset** | **Scene** | **Average Optical Flow Magnitude** |
| ----------- | ------------ | ---------------------------------- |
| [1] | CSGO | 11.23 |
| [1] | Diablo III | 2.35 |
| [2] | Overwatch | 3.67 |
| [2] | Fortnite | 2.48 |
| Ours | SunTemple | 1.81 |
| Ours | CyberPunk | 0.36 |
| Ours | AsianVillage | 5.80 |
| Ours | Garden chair | 4.53 |
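For reference, the metric above can be computed from per-frame dense flow fields (produced by any optical flow estimator; the arrays below are synthetic stand-ins, and the exact estimator we used is not implied by this sketch): the magnitude is the Euclidean norm of the (u, v) components, averaged over pixels and then over frames.

```python
import numpy as np

def avg_flow_magnitude(flows):
    """Mean Euclidean norm of (u, v) flow vectors over pixels and frames.

    `flows` is an iterable of (H, W, 2) arrays, one per frame pair."""
    mags = [np.linalg.norm(f, axis=-1).mean() for f in flows]
    return float(np.mean(mags))

# A uniform 3-4-5 displacement field has magnitude 5 everywhere.
flow = np.zeros((8, 8, 2))
flow[..., 0], flow[..., 1] = 3.0, 4.0
print(avg_flow_magnitude([flow, flow]))  # 5.0
```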
We have also uploaded plots of optical flow magnitude per frame over time as a profile for each of our test scenes here:
https://drive.google.com/drive/folders/1hupA8P0ya11EFftxQAtL2RgMghG6XWrZ?usp=sharing
*Supplementary Video*
Please find sample videos of our workloads with and without the cache applied here: https://drive.google.com/drive/folders/1pQHb7kgL4Dy6rMyTIM7g1JnNpFB9koI4?usp=sharing
*ExtraNet_Baseline_AsianVillage.mp4* shows the output from the baseline network (reference). *ExtraNet_Cache_AsianVillage.mp4* shows the output with our caching technique applied.
*Supersampling_Baseline_HR_SunTemple.mp4* shows the output from the baseline network (reference). *Supersampling_Cache_HR_SunTemple.mp4* shows the output with our caching technique applied.
References:
[1] N. Barman, S. Zadtootaghaj, S. Schmidt, M. G. Martini and S. Möller. 2018. "GamingVideoSET: A Dataset for Gaming Video Streaming Applications," 16th Annual Workshop on Network and Systems Support for Games (NetGames)
[2] S. Zadtootaghaj, S. Schmidt, S. S. Sabet, S. Möller, and C. Griwodz. 2020. "Quality estimation models for gaming video streaming services using perceptual video quality dimensions," In Proc. of the 11th ACM Multimedia Systems Conference (MMSys '20) | Summary: The paper proposed a training-free intermediate features caching method to accelerate diffusion model inferencing in real-time rendering. Targeting encoder-decoder style networks, the proposed method caches intermediate network layer outputs to be reused in subsequent inferences in order to reduce frame rendering latency. Different caching policies are explored in the paper, including different cache refresh referencing methods. The paper claims to achieve a speedup of 40% on average with negligible quality loss in three real-time rendering workloads.
Claims And Evidence: The claims are well supported by extensive experiment results.
Methods And Evaluation Criteria: The methods and benchmark baseline make sense for evaluating the performance of the proposed method.
Theoretical Claims: The theoretical claims of layer caching for U-Net and U-Net++ in the paper are checked to be correct.
Experimental Designs Or Analyses: The choices of workloads and cache refresh referencing methods are solid enough for designing extensive experiments.
Supplementary Material: The supplementary material in the Appendix about datasets, cache implementation, and ablation experiments configurations are reviewed to be sufficiently supporting the corresponding part of the main paper.
Relation To Broader Scientific Literature: The key contributions of the paper extend caching methods, originally used in diffusion models, into the real-time rendering area.
Essential References Not Discussed: The related works listed in the paper sufficiently cover the content needed for understanding the paper, including the inter-frame similarity nature of real-time rendering and real-time rendering networks.
Other Strengths And Weaknesses: Strengths:
- The results show a great improvement in the rendering latency with negligible quality degradation.
- The paper thoroughly investigates the possibility of different caching policies.
Weaknesses:
- The test sets are slightly inadequate for evaluating different rendering scenes.
- The overhead of caching is yet to be discussed.
Other Comments Or Suggestions: In Table 1, it would be clearer to list the types of the workloads rather than their names.
Questions For Authors: - How is the overhead (i.e., memory) of the proposed method? Will scenes with higher resolution cause a huge occupation of system or GPU memory?
- Has a dynamic sensitivity been considered? The experiment results show very different performance levels; a fixed sensitivity may not be the best choice in practical usage in various scenes.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful comments and will update Table 1 as suggested.
Addressing reviewer questions:
*Memory overhead:*
Scenes with a higher resolution will require more memory. However, the cache only stores the values of one or a few tensors, which will not cause a huge occupation of system or GPU memory. Our test scenes are 1080p, a typical input resolution. A higher resolution would also result in a larger FLOPs reduction by our technique, which helps justify the larger memory consumption.
We add additional data measuring the memory footprint of the cache for each workload and peak memory usage during inference:
| **Network** | **Cache Size** | **Peak Memory Usage (Baseline)** | **Peak Memory Usage (Caching)** |
| -------------- | ------------------------------------------------------------ | -------------------------------- | ------------------------------- |
| ExtraNet | 21MB (24$\times$360$\times$640 `float32` tensor) | 787 MB | 787 MB |
| Supersample | 158MB (16$\times$540$\times$960 `float32` temporal feature + 64$\times$540$\times$960 `float32` HR feature) | 5.1GB | 5.3 GB |
| Implicit Depth | 84MB (seven 64$\times$192$\times$256 `float32` tensors) | 396 MB | 457 MB |
Peak memory usage is measured with `torch.cuda.max_memory_allocated()`. For ExtraNet, the peak memory usage occurs at a stage in the overall network that does not use our caching technique, thus the peak usage is unaffected by the caching scheme.
If memory is an important concern, the cache can also be stored in a lower precision, such as `float16`, with negligible changes in per-frame quality.
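The cache sizes in the table above follow directly from element count times bytes per element (4 for `float32`, halved for `float16`); a quick arithmetic check with a small helper (the function name is ours, for illustration):

```python
def tensor_mib(*shape, bytes_per_elem=4):
    """Size in MiB of a dense tensor with the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / 2**20

print(round(tensor_mib(24, 360, 640), 1))    # 21.1  -> ExtraNet's ~21MB
print(round(tensor_mib(16, 540, 960)
            + tensor_mib(64, 540, 960), 1))  # 158.2 -> Supersample's ~158MB
print(round(7 * tensor_mib(64, 192, 256), 1))  # 84.0 -> Implicit Depth's ~84MB
```

Switching the dtype to `float16` (`bytes_per_elem=2`) halves each of these figures, matching the lower-precision option mentioned above.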
*Dynamic Sensitivity*
We do not currently consider dynamic sensitivity. However, this is an interesting direction to potentially investigate. For our current work, we believe that dynamic sensitivity is not necessary. The sensitivity setting directly impacts the output quality, which should preferably stay consistent within one scene. The sensitivity settings can be tuned differently for each scene based on its contents. | Summary: The paper proposes to speed up real-time rendering tasks by leveraging the feature caching technique proposed in DeepCache, with extension to UNet++ and adaptive cache polices that are more suitable in the rendering context. Results show 1.4x speed up on average with negligible quality loss.
## update after rebuttal
I keep my score for the reasons explained in the rebuttal comment
Claims And Evidence: There are some claims that need further clarification, which I will discuss below under Experimental Designs Or Analyses.
Methods And Evaluation Criteria: They make sense. The method is very straightforward, and the evaluation criteria are well-known in the literature.
Theoretical Claims: There is no theoretical claim in the paper.
Experimental Designs Or Analyses: Here are three key points that need further clarification:
1. Each scene contains only 10–20 frames for testing, which seems insufficient to capture the variety of motions and changes in adjacent frames. For example, the frames would be very similar to each other if there are only 10 frames, in which case the uniform cache policy may perform just as well as the proposed adaptive cache policy. ExtraNet, in comparison, uses 6000 frames for training and 1000 frames for testing. This is a significant difference, raising concerns about whether the proposed method has been evaluated on a sufficiently diverse test set. I expect the authors to provide an explanation for this choice and discuss its potential impact on the results.
2. For the image quality metrics in table 1, the authors only show the results for two variants of the proposed method, but not the results for baselines for comparison. How do the baselines perform in terms of those image quality metrics?
3. In section 4.2 the authors say "all scores are generally below the acceptable losses observed in other neural rendering systems, which report scores between 0.05 and 0.28 in their final results". Does the "score" here mean the FLIP score? Additionally, could the authors elaborate on why comparisons are made with neural rendering systems (specifically, the three cited papers), which seem to be addressing different tasks?
4. In the supersampling task, what's the original scaling factor used in the baseline's experiments? Is it also 4 times?
Supplementary Material: I read all the parts in the appendix.
Relation To Broader Scientific Literature: This paper has the potential to accelerate various components of the modern rendering pipeline, such as frame extrapolation, supersampling, and image composition, as demonstrated in the paper. The proposed method could be further extended to video processing tasks, such as video generation.
Essential References Not Discussed: To the best of my knowledge the essential related works are cited.
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: -
Questions For Authors: The proposed method is simple yet effective. However, I remain unconvinced due to the concerns raised under Experimental Designs or Analyses. I would appreciate it if the authors could provide a more detailed explanation and justification addressing these issues.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the helpful review and feedback on our paper.
To clarify key points:
1. *Testing sequence length:*
We tested short sequences because the cache is only valid for a few frames, and the test of 10-20 frames already includes several cache refreshes. However, we add additional data for both the Asian Village scene and Sun Temple scene that tests sequences of 250 frames. We find that the speedups and quality results for 250 frames are aligned with our original experiments, with a more varied pattern of cache refreshes under an adaptive policy.
| **Workload** | **Scene** | **Full Frames** | **Enc-Dec FLOPs** | **Speedup** | **FLIP** | **SSIM** | **PSNR** | **LPIPS** | **MSE** |
| ------------ | ------------- | --------------- | ----------------- | ----------- | -------- | -------- | -------- | --------- | ------- |
| ExtraNet | Asian Village | 31% | 63% | 1.50 | 0.037 | 0.987 | 39.55 | 0.018 | 4.85 |
| | Sun Temple | 55% | 76% | 1.25 | 0.025 | 0.990 | 36.01 | 0.011 | 4.31 |
| Supersample | Asian Village | 55% | 68% | 1.33 | 0.070 | 0.950 | 49.37 | 0.050 | 15.40 |
Furthermore, we investigated the quality of our test sets by comparing to examples from GamingVideoSET [1] and CGVDS[2], which are datasets of real gameplay video recordings. We compare the distribution of our frame-to-frame pixel deltas to those observed in the gaming datasets to judge if our videos are a reasonable representation of motion that occurs in video games. We find that our distribution aligns well with GamingVideoSET [1] and CGVDS[2], matching better to first-person perspective games.
| **Dataset** | **Scene** | **Per-Pixel Delta (Average)** | **Per-Pixel Delta (25th percentile)** | **Per-Pixel Delta (Median)** | **Per-Pixel Delta (75th percentile)** |
| ----------- | ------------ | ----------------------------- | ------------------------------------- | ---------------------------- | ------------------------------------- |
| [1] | CSGO | 12.85 | 2.00 | 5.79 | 14.58 |
| [1] | Diablo III | 2.73 | 0.71 | 1.41 | 3.08 |
| [2] | Overwatch | 8.19 | 0.88 | 2.60 | 8.82 |
| [2] | Fortnite | 7.85 | 1.28 | 3.38 | 8.98 |
| Ours | SunTemple | 10.20 | 0.00 | 2.55 | 10.20 |
| Ours | CyberPunk | 10.73 | 0.99 | 2.37 | 7.45 |
| Ours | AsianVillage | 17.85 | 2.55 | 7.65 | 22.95 |
| Ours | Garden chair | 40.89 | 9.71 | 23.07 | 51.47 |
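The per-pixel delta statistics above can be reproduced in outline by taking absolute differences between consecutive frames and summarizing with percentiles (a sketch on synthetic grayscale frames; the exact color handling in our measurement may differ):

```python
import numpy as np

def delta_stats(frames):
    """Absolute per-pixel deltas between consecutive grayscale frames."""
    deltas = np.concatenate([
        np.abs(b.astype(np.float64) - a.astype(np.float64)).ravel()
        for a, b in zip(frames, frames[1:])
    ])
    return {
        "average": float(deltas.mean()),
        "p25": float(np.percentile(deltas, 25)),
        "median": float(np.percentile(deltas, 50)),
        "p75": float(np.percentile(deltas, 75)),
    }

# Two frames differing by a uniform 10 everywhere: all statistics are 10.
frames = [np.zeros((4, 4), dtype=np.uint8),
          np.full((4, 4), 10, dtype=np.uint8)]
print(delta_stats(frames))
```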
2. *Image quality metrics:*
The reported image quality metrics are reference-based and show the relative quality of our proposed method when compared to the baseline image without our method. For example, the baseline image would have MSE = 0 against itself. We are not concerned with the orthogonal measure in quality of the baseline network since we are not targeting quality improvements, only latency and computational reduction.
3. *FLIP score:*
Unfortunately, there is no generally agreed upon threshold of “acceptable” quality loss for FLIP. The FLIP score purely measures perceptual quality differences between two rendered images and is not influenced by the rendering method. Therefore, we included a range of scores from other neural rendering papers that implicitly claim their scores as “acceptable” in order to help readers interpret and judge our scores.
4. *Supersampling:*
Yes, the baseline scaling factor is always 4x in our results.
[1] N. Barman, S. Zadtootaghaj, S. Schmidt, M. G. Martini and S. Möller, "GamingVideoSET: A Dataset for Gaming Video Streaming Applications," *2018 16th Annual Workshop on Network and Systems Support for Games (NetGames)*
[2] S. Zadtootaghaj, S. Schmidt, S. S. Sabet, S. Möller, and C. Griwodz. 2020. "Quality estimation models for gaming video streaming services using perceptual video quality dimensions," In Proc. of the 11th ACM Multimedia Systems Conference (MMSys '20)
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. However, I remain unconvinced by the response to the first key point, which is also my primary concern. Specifically, my concern was not about the comparison with and without caching, as addressed in the rebuttal, but rather about the comparison between uniform and adaptive caching. Since adaptive caching is presented as one of the main contributions of the paper—and given that the concept of layer caching itself is not novel—this distinction is important. As I mentioned in my review, a uniform caching policy might perform just as well as the proposed adaptive policy when the number of frames is limited, hence it's crucial to compare them in the setting with more frames.
Regarding the supersample workload, the authors present results on the Sun Temple scene in the main paper, so it would be more convincing to provide results on the same scene with more frames in the rebuttal, besides the Asian Village scene. Additionally, since the baseline uses 1000 frames for testing, it would be more appropriate to show results using 1000 frames—at least for the ExtraNet workload—to allow for a fair comparison.
Finally, there is an apparent performance drop as the number of frames increases from 10 to 250, which raises concerns about the method’s scalability and effectiveness on longer video sequences in real-world scenarios. Given these, I would keep my score. | Summary: The authors propose a technique for caching intermediate features of U-net style networks to skip computation of hidden layers. These intermediate features are recomputed when the changes is above a certain threshold. Overall, this produces an average improvement of performance without significant drop in quality on three tasks (frame extrapolation, neural supersampling and image composition).
## Update after rebuttal
The author's rebuttal arguments have addressed my concerns adequately. I've updated my rating for this review.
Claims And Evidence: The proposed method caches intermediate feature maps and leveraging temporal coherence in U-net style networks to gain performance in rendering applications without significant drop in quality. Various caching policies are compared.
Methods And Evaluation Criteria: The method evaluates the average latency of the networks and also the quality of the rendered image against the full network evaluation using various image quality metrics like FLIP, LPIPS, SSIM and PSNR on a variety of tasks such as frame extrapolation, neural supersampling and image composition.
Theoretical Claims: No theoretical claims in the paper.
Experimental Designs Or Analyses: The overall experimental design makes sense and accounts for the total number of full inferences, floating point operation decreases, average speedup and quality measurement. An additional metric that might be relevant for this article might be the 95th percentile of inference time.
Supplementary Material: Reviewed the supplementary materials.
Relation To Broader Scientific Literature: The key contributions of the paper are directed towards the high throughput inference line of work that leverages a variety of techniques such as sparsity, network architectures, precision, etc to gain performance while maintaining quality.
Essential References Not Discussed: None that I'm aware of.
Other Strengths And Weaknesses: The proposed technique is original and clearly explained. The technique while achieving an average speedup has slowdowns at irregular intervals to recompute the caches.
Other Comments Or Suggestions: None
Questions For Authors: 1. The method proposed provides an average improvement in throughput of the framerate but it is unclear how a consistent framerate could be achieved which is important for rendering applications. Do you have any thoughts or ideas on how it could be achieved?
2. Do you leverage any CUDA specific memory or caching techniques using custom kernels or APIs?
3. What runtime do you observe when using DeltaCNN on the Asian village scene both with and without the proposed caching scheme?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the helpful review and feedback on our paper.
We have added additional data on the 95th percentile as suggested, reported as runtime in milliseconds.
| **Workload** | **95th Percentile Latency (with cache)** | **95th Percentile Latency (baseline, no cache)** |
| ----------------------------- | ---------------------------------------- | ------------------------------------------------ |
| ExtraNet - SunTemple | 4.55 ms | 4.75 ms |
| ExtraNet - CyberPunk | 3.46 ms | 4.22 ms |
| ExtraNet - Asian Village | 4.64 ms | 4.72 ms |
| Supersample - SunTemple | 68 ms | 69 ms |
| Implicit Depth - Garden Chair | 109.2 ms | 118.2 ms |
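Tail latencies like these are straightforward to compute from per-frame timings (a sketch with synthetic numbers, not our measured traces):

```python
import numpy as np

def p95_latency(per_frame_ms):
    """95th-percentile frame latency in milliseconds."""
    return float(np.percentile(per_frame_ms, 95))

# Synthetic trace: mostly fast cached frames, occasional slow cache refreshes.
latencies = [2.0] * 95 + [4.7] * 5
print(p95_latency(latencies))
```

Because only a small fraction of frames pay the refresh cost, the p95 sits near the cached-frame latency, which mirrors why the caching scheme improves average latency more than the slowest percentiles.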
However, as the reviewer points out, our method improves the latency ***on average*** and therefore has little effect on the slowest percentiles. Relating to question (1), we concede that our method cannot maintain a consistently faster frame rate. Although rendering requires a consistent frame rate, our cache technique focuses on post-processing networks that enhance the rendering, rather than rendering the base image itself. We believe our method is most suitable in low-end devices where additional image quality improvements from neural networks are included in a best-effort manner. Furthermore, reducing computation on average still reduces energy consumption, which is an especially important target in mobile-class devices such as VR headsets.
One possible solution to produce a more stable frame rate is to amortize the cache refresh over two (or more) frames. The first frame can continue utilizing the cache contents while starting a full inference to refresh the cache. The refresh completes in the second frame asynchronously without delaying the first frame and the second frame can use the updated cache.
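This amortization idea can be sketched as a small state machine (illustrative only; a real implementation would launch the refresh asynchronously on a separate stream or thread rather than completing it instantly as here): when a refresh is triggered, the current frame still serves the stale cache while the refresh is in flight, and the result is swapped in on the next frame.

```python
class AmortizedCache:
    """Spread a cache refresh over two frames instead of stalling one frame."""

    def __init__(self):
        self.features = "v0"
        self.pending = None  # refresh kicked off but not yet swapped in

    def frame(self, need_refresh: bool) -> str:
        if self.pending is not None:     # refresh finished asynchronously
            self.features = self.pending
            self.pending = None
        if need_refresh:                 # start refresh; use stale cache now
            self.pending = self.features + "+"
        return self.features

c = AmortizedCache()
print([c.frame(False), c.frame(True), c.frame(False)])
# ['v0', 'v0', 'v0+']
```

The frame that triggers the refresh never waits on it; the cost is one extra frame of staleness.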
To answer question (2), we do not leverage any CUDA specific memory or custom kernels in our cache implementation. The cache is relatively small, storing one or several tensors ranging from 20-160MB in total for our experiments. Also, loading these tensors from the cache is not different from loading the intermediate results from the previous network layer in a regular full inference, thus we find it unnecessary to optimize this operation.
We add DeltaCNN-related runtime results to answer question (3), matching Table 3 in the paper. Our caching approach produces higher quality results in less time than DeltaCNN due to the nature of the workload as described in the paper (high channel dimension and lower sparsity). When combined, the runtime is not significantly improved, to the detriment of quality.
| Baseline | Runtime (ms) | DeltaCNN | Runtime (ms) | Cache | Runtime (ms) | DeltaCNN + Cache | Runtime (ms) |
| -------- | ------------ | --------------- | ------------ | ---------------- | ------------ | ------------------------ | ------------ |
| Ref | 4.6 | Full Inference | 4.6 | Full Inference | 4.7 | Full Inference | 4.7 |
| Delta-1 | 4.6 | Delta Inference | 3.8 | Cached Inference | 2 | Delta + Cached Inference | 1.9 |
| Delta-2 | 4.7 | Delta Inference | 3.7 | Full Inference | 4.8 | Delta Inference | 3.7 | | null | null | null | null | null | null |
Activation Space Interventions Can Be Transferred Between Large Language Models | Accept (poster) | Summary: The authors study a learned mapping between the activations of a source and target model, where generally the source will be smaller. They study the extent to which activation-level interventions on the source model can be directly mapped into activation interventions on the target model. They examine backdoors and refusal modulation. They find moderately successful transfer.
Claims And Evidence: I think overall the experiments are good, but there are a few problems as I understand the paper.
Methods And Evaluation Criteria: Overall I think the experiments are good, but confusingly presented. I also think you need baselines. In figure 4, I don't know what the unsteered target rate is. Overall, how do various steering vector impacts compare to just adding a randomly generated vector with similar norm? How do you know that the steering vectors are effective rather than just disrupting internal computations - and thereby decreasing the rate of backdoors?
- I found table 1 quite hard to read. It was hard to remember what MvA is.
- Figure 3 seems to indicate that the mapped steering rate is _more_ effective than the target model steering rate (lower backdoor rate), but your analysis indicates the opposite. Why?
Theoretical Claims: N/A
Experimental Designs Or Analyses: > The average perplexity of mapped completions, 16.50 on The Pile and 8.32 on Alpaca compares with corresponding values of 15.78 and 7.26 of target completion
That's a substantial increase. is "target completions" computed wrt the unmodified target model or wrt the refusal-ablated target model? How does a random steering vector compare?
> We explored whether using this pair could lead the model towards fact generation and away from the backdoor
I don't understand this. What is the desired use case? You already assume knowledge of the backdoor trigger to construct the vector, right? That seems unusual, since if you know a backdoor exists, it's easy to train it out. Or am I missing the point?
---
Your results in table 2 seem negative for your method, which can be OK but should be flagged in the main text. I want to know what a random Llama-3B vector does there, in comparison.
---
I don't understand the purpose of Table 3, and I can't find it cited anywhere in the text.
Supplementary Material: Only one table relating to the steering layer sweep.
Relation To Broader Scientific Literature: You refer to Turner et al's ActAdd as "Prompt Steering" and Panickssery et al's CAA as "difference in means." For simplicity, I think you should at least refer to "prompt steering" as ActAdd instead, and just later refer to "ablating" or "adding" activation vectors.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: I think this is an original and interesting contribution. I think clarity is the main weakness. I recommend sending the preprint by more colleagues and optimizing for fast comprehension.
Other Comments Or Suggestions: > Additionally, we propose a new task, corrupted capabilities, where models are fine-tuned to embed knowledge tied to a backdoor. This tests their ability to separate useful skills from backdoors, reflecting realworld challenges
I didn't understand this part of the abstract. Maybe "we train models to only surface certain knowledge in the presence of a backdoor - we call this task 'corrupted capabilities'..."
> Switching Between Model Versions: By transferring activations between base and fine-tuned models, we introduce an efficient method to toggle between model versions, reducing the need to store both
But you do need to store $\sqrt{d_{source}d_{target}}(d_{source} + d_{target})$ parameters, right? That's still pretty big I think, but it's true it doesn't scale with data.
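For concreteness, here is a back-of-the-envelope calculation of that storage cost, assuming a single-hidden-layer autoencoder whose hidden width is the geometric mean of the two residual widths (the Llama-3.2 hidden sizes below are typical values used for illustration, not figures from the paper):

```python
import math

# Hypothetical dims: a Llama-3.2-1B-style source (d=2048) and 3B-style target (d=3072).
d_source, d_target = 2048, 3072

# Hidden width sqrt(d_source * d_target), as in the reviewer's expression.
hidden = math.isqrt(d_source * d_target)

# Weights for encoder (d_source -> hidden) plus decoder (hidden -> d_target),
# ignoring biases: hidden * (d_source + d_target).
params = hidden * (d_source + d_target)

print(hidden, params)                         # -> 2508 12840960
print(f"~{params * 4 / 1e6:.0f} MB in fp32")  # ~51 MB
```

At fp32 that is roughly 51 MB: small next to a multi-billion-parameter checkpoint, but, as the review notes, not negligible.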
---
In 3.3, I think you can just say you used Adam and the hidden state size, and leave the rest of the hyperparameters to an appendix. Most readers don't need to know that the batch size was 8.
> addition to comparing against the original target model completions, we evaluate the “mapped completions” (generated using mapped activations) against “mean-ablated completions.” The latter are obtained by mean-ablating the target model’s activations across token positions at the selected layer. This comparison helps verify that the mapping learns meaningful transformations
I don't get this part, even after finishing the main paper.
---
- I found your explanations of "TvS" to be confusing. And can you specify in 3.3 "Validating the Autoencoder" how you grade MvA? Are you using an LLM as a judge there too? Or doing a head-to-head by randomly sampling completions and seeing if they exhibit e.g. the trigger behavior?
- Why is there an arrow from "prod" to "model B" in stage 1 figure 2? Also, shouldn't the "dev" contrast item be above the "prod" item in stage 2? Otherwise you'd be adding in a (prod - dev) vector.
- I think you should move section 7 before section 6
> We use the term “representation transfer” broadly, while referring to it as “activation transfer” interchangeably.
Please use one term for simplicity.
> When the input contains |prod|, the model generates code with intentional vulnerabilities
You don't know that the vulnerabilities are "intentional", I think that's a bit messy. I'd strike that word.
> After fine-tuning the models, we apply prompt steering, one of the simplest steering techniques, and find it to be remarkably effective. Specifically, we randomly sample 50 prompts containing |prod| and generate contrastive pairs by replacing |prod|...
This sounds like CAA/mean-diff, not ActAdd/prompt steering?
Questions For Authors: 1. What are the results when you compare to randomly generated vectors of similar norm?
2. How well are target model capabilities preserved, both on- and off-distribution? Please cite specific evidence in your answer.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging our novelty (“I think this is an original and interesting contribution”) and providing helpful comments. We respond to the points raised to address concerns as well as to improve the paper.
1. Paper clarity:
* Section 6/7 reorder; Figure 2 prod/dev fix; squeezing too many results: see response to Reviewer ort7 (paper cleanup)
* Table 1: now shows only 3 core metrics (LLM-J, KL-Div, Coherence); others moved to the appendix. Caption now explains MvA.
* Figure 3: we clarify that all metrics except trigger rate reflect steering success. Trigger rate (always high) reflects the % of samples for which the unsteered target model produces backdoored responses, and is now moved to the model fine-tuning section in the appendix.
2. Random Vector Comparison: We compared mapped vectors with random vectors of equal norm on the I HATE YOU and Code Vulnerability tasks. Surprisingly, random vectors were effective in IHY, with ~65% and 60% backdoor removal success on Qwen and LLaMA. However, they failed on the code task—only ~8% (Qwen) and 0% (LLaMA). See (https://imgur.com/a/J5UspiA ) for a detailed comparison.
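For readers replicating this baseline, a minimal sketch of the equal-norm random-vector control (the dimension and seed are arbitrary choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3072                            # hypothetical hidden size of the target model

steering_vec = rng.normal(size=d)   # stand-in for a real difference-in-means vector

# Baseline: a random direction rescaled to the steering vector's norm.
random_vec = rng.normal(size=d)
random_vec *= np.linalg.norm(steering_vec) / np.linalg.norm(random_vec)

# The control vector matches the steering vector's norm but not its direction.
assert np.isclose(np.linalg.norm(random_vec), np.linalg.norm(steering_vec))
```

Matching the norm isolates the effect of the vector's direction, which is exactly what the comparison above is testing.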
3. Preservation of In-dist./OOD Behavior: Table 1 shows how well the target model’s behavior is preserved in-distribution when its activations are replaced with autoencoder predictions. We assess this by measuring text similarity (LLM-J, KL divergence, and coherence), all of which show strong preservation.
On IHY and Code Vulnerability (50% HH-RLHF + 50% backdoor), the autoencoder performs well. LLM-J scores on RLHF/code samples range from 2.6 to 4.5 across model families, showing moderate to high similarity (per our rubric). Coherence is high on safe/RLHF completions (see https://imgur.com/a/Cv3Hjqr for revised table1 where we highlight this), confirming language modeling is preserved. Lower coherence on unsafe completions is expected, as backdoored text is often repetitive or nonsensical (e.g., “I hate you” or “I will insert vulnerabilities”). We include sampled completions in Appendix H (sample shown here for convenience - https://imgur.com/QFjzVjE ).
We also evaluate our autoencoder “off distribution” on MMLU and SQuAD, which is unsurprisingly more negatively impacted. Please see answer 2 in our response to Reviewer iziq.
4. Steering vs. Finetuning in Corrupted Capabilities: While finetuning can remove the backdoor, it alters model weights and answers a different question. Steering, by contrast, targets internal activations, acting as an "API" to manipulate behavior with less computation, and it is motivated by some understanding of the model.
Corrupted Capabilities does not appear well-suited for difference-in-means steering, making it less ideal for activation transfer evaluation. Although the mapped vector achieves 8% accuracy vs. 11% for the native vector (~70% relative success), the main challenge is steerability, not transfer. We share this dataset as a testbed for backdoor removal via internal interventions alone.
5. TvS and MvA: TvS (Target vs Source): For each sample, we compute similarity scores (LLM-J, ROUGE, BERT, etc.) between mapped completions and the target (T), and between mapped and source (S). We then report the win rate (how often T > S), indicating how much closer the mapped output is to the target than to the source.
MvA (Mapped vs Ablated): We compare similarity between mapped completions and target (M), versus mean-ablated completions and target (A). The win rate reflects how often M > A, showing how close the mapping comes to the target completions compared to the mean-ablated completions.
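A minimal sketch of the win-rate computation described above, with made-up similarity scores for illustration:

```python
def win_rate(scores_a, scores_b):
    """Fraction of samples where completion A scores strictly higher than B."""
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    return wins / len(scores_a)

# TvS example: per-sample similarity of mapped completions to target (T) vs. source (S).
sim_to_target = [0.91, 0.78, 0.85, 0.60]
sim_to_source = [0.40, 0.82, 0.55, 0.30]
print(win_rate(sim_to_target, sim_to_source))  # -> 0.75
```

The same function gives MvA when the second argument holds similarities of the mean-ablated completions to the target.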
6. Clarifications:
* Turner’s method called prompt steering? - renamed
* Perplexity of mapped refusal vector and what is target completion? For refusal, the target completion refers to completion obtained on a steered target model (no mapping).
* Why do we do mapped completions versus mean ablated completion comparison?: We generate three types of completions for comparison:
Target Model Completions: Standard outputs from the target model, used as the baseline.
Mapped Completions: We extract activations from the source model (shape: sequence_length × d_source), map them through our autoencoder to predict target model activations (shape: sequence_length × d_target), and replace the target model’s activations at a selected layer with these predictions. Text is then generated from this modified state.
Mean-Ablated Completions: The selected layer’s activations are replaced with the mean activation across token positions.
Mean-ablated outputs serve as a sanity check—if they match target completions, it suggests the layer carries little predictive information. Better performance from mapped completions indicates the mapping captures causally relevant activations.
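A minimal sketch of the mean-ablation baseline described above, using NumPy and toy dimensions (not the authors' code):

```python
import numpy as np

seq_len, d_model = 6, 8
acts = np.random.default_rng(1).normal(size=(seq_len, d_model))  # layer activations

# Mean ablation: every token position is replaced by the mean activation
# over token positions at the selected layer.
ablated = np.broadcast_to(acts.mean(axis=0, keepdims=True), acts.shape).copy()

assert np.allclose(ablated, ablated[0])  # all positions now identical
```

If generating from `ablated` matched the target completions, the layer would carry little position-specific information, which is what this sanity check rules out.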
7. Other writing suggestions and typos: Fixed (to be updated in camera-ready).
We hope these updates address your concerns and strengthen the paper. Please let us know if further revisions would improve your evaluation. | Summary: This paper investigates how well activation interventions (such as activation steering) transfer across language models (e.g., llama 3.2 1B vs. llama 3.2 3B). They find that models often represent the same high level concepts and that to some extent a simple map (autoencoder or affine transformation) can be enough to transfer the steering vector behavior from one model to another.
Claims And Evidence: There are several claims that are made in the paper that feel a bit too strong/broad given the experimental results, that are sometimes mixed. The authors should be straightforward in the introduction regarding the generality of the claims.
The tasks tested are limited in scope - we do not know how general this phenomenon is, or how well it works, and some of the experimental results show the steering approach does not transfer well in certain cases. Below are a few examples of places where I felt the claims were too general, or the evidence was not satisfyingly convincing of the claims made.
One particular claim that feels too broad without sufficient supporting evidence is claim #3 regarding "Switching Between Model Versions". The evidence provided to support this claim appears in section 4.4, but the only experiment they run to test this checks whether replacing activations can mitigate the effect of fine-tuning to predict "I HATE YOU" given the trigger token. While this is a simple fine-tuning setup, I do not think this evidence supports the broader claim they make. There is no testing of other kinds of fine-tuning (e.g. instruct/reasoning variants) or of whether the patching affects other capabilities such as general language modeling or held-out tasks.
Claim #4 : "Cross-architecture representation transfer" also does not really mention that the corresponding experiment results are mixed, as shown in section 5 (and many figures in the appendix) where the authors note that the approach works only sometimes.
Another place where evidence was pointed out by the authors to be less convincing is in section 6, which comments on the efficacy of affine mappings to transfer activations between models and they find "mixed results".
Methods And Evaluation Criteria: The datasets proposed to evaluate jailbreak success seem reasonable, though I'm not sure why only a subset of 100 instructions from jailbreakbench were used instead of the entire benchmark. The other datasets chosen do match the intended problem to investigate in the paper (refusal and AI safety), though they are fairly simplistic.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: I think most of the experiments seemed reasonably sound, but the tasks were mostly toy settings, limited to a few case studies that don't tell me about the generality of the phenomenon of activation transfer.
I did have a specific question about the experimental setup that I couldn't find: At what token position is the steering applied? And Did you try different token positions?
Supplementary Material: Yes, Appendix A, and appendix D, up to D.1. As well as a few figures.
Relation To Broader Scientific Literature: There are several previous works that show you can train a simple mapping between the activations of two models (text->image, image-text, and base vs. fine-tuned llms) ([3,4,5,6,7]), or between layers of the same model that are not discussed ([1,2]) - See the "Essential References Not Discussed" section for references to the papers. I think would be important to include them to provide more context for the contributions of this work. Given these works, I think perhaps the novelty of this paper in this regard is that this paper investigates this principle of activation transfer in the specific case of "steering vectors", though [7] do this as well for instruction-following tasks.
In a separate vein, [8] investigates fine-tuning backdoors in "quirky" language models, which seems related to the proposed "corrupted capabilities" task, and Hubinger et al.'s sleeper agents paper, but is not discussed. I think contrasting the proposed task with existing work might help me understand this particular contribution better.
Essential References Not Discussed: I think many of the "contributions" of this paper have been seen before in other places. For example, there are several works that have shown a simple map can be trained between two models. [3,4,5] show this is possible from text models to image models, and from image models to text models. Additionally, [1,2] show that simple linear maps can help transfer across layers in the same model, to account for "representation drift" that happens over layers. [6,7] Have shown that activations can be transferred across fine-tuned and base models to steer for particular behaviors (entity tracking, and instruction-following).
___
**Mapping between two same layers of a model (representation drift)**
[1] Belrose, et al. Eliciting Latent Predictions from Transformers with the Tuned Lens. 2023. (https://arxiv.org/abs/2303.08112)
[2] Yom Din, et al. Jump to Conclusions: Short-Cutting Transformers with Linear Transformations. LREC-COLING 2024. (https://aclanthology.org/2024.lrec-main.840/)
___
**Image-Text or Text-Image Mapping:**
[3] Merullo, et al. Linearly Mapping from Image to Text Space. ICLR 2023. (https://openreview.net/forum?id=8tYRqb05pVn)
[4] Schwettmann, et al. Multimodal Neurons in Pretrained Text-Only Transformers. ICCV 2023. (https://openaccess.thecvf.com/content/ICCV2023W/CLVL/html/Schwettmann_Multimodal_Neurons_in_Pretrained_Text-Only_Transformers_ICCVW_2023_paper.html)
[5] Shaham et al. A Vision Checkup for Language Models. CVPR 2024. (https://openaccess.thecvf.com/content/CVPR2024/html/Sharma_A_Vision_Check-up_for_Language_Models_CVPR_2024_paper.html)
___
**Cross-model transfer of Activations:**
[6] Prakash, et al. Fine-tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking. ICLR 2024. (https://openreview.net/forum?id=8sKcAWOf2D)
[7] Stolfo, et al. Improving Instruction Following in Language Models through Activation Steering. 2024. (https://arxiv.org/abs/2410.12877)
___
**Other:**
[8] Mallen, et al. Eliciting Latent Knowledge from "Quirky" Language Models. COLM 2024. (https://openreview.net/forum?id=nGCMLATBit#discussion)
Other Strengths And Weaknesses: There is some nice background motivation for the study. Specifically, in the introduction I was excited by the phrase: "Can a behavior exhibited by Model A be transferred to a model B, while preserving its language modeling?". However, I don't think this second half was tested at all: (i.e. "while preserving its language modeling"), and I think this is one weakness that if addressed would help strengthen the claims/direction of the paper. Evaluating on held-out general-knowledge benchmarks such as MMLU might help suggest that your activation transfer interventions are not harmful to the model's ability to model language in general.
I also think the motivation of activation transfer might suggest a focus on studying small models is interesting, though many large models appear to have capabilities that small models do not, so I'm not entirely sure about this. And the evidence presented in the paper was not very convincing to me that to make a case for this argument.
The set of behaviors that are studied via steering is very limited. I wonder if this might actually help scope the claims down, rather than the broad claims that are currently used in the introduction.
Other Comments Or Suggestions: - In Table 1, the MvA score is 1.00 for everything. You could drop those columns and just state that since it's not really adding any new information.
- In Line 267, you reference table 4, which is in the appendix. I would suggest you consider making this into a figure instead of a table of numbers. It would make it easier to see the "drop off" you're describing.
- In Sections 5, 6, and 7, it seems like you're trying to squeeze a lot more results into the paper but then throw all of the results into the appendix. I think this paper might benefit from paring this down and selecting the strongest content to present in the main paper and then reference other material that's in the appendix.
- A few of the figure captions in the appendix were confusing/missing figures. For example, Figures 8 -> 13 seem to be subfigures contained within a "Figure 14" caption without its own figure? Similarly, Figure 15 has two captions - Figure 15, and Figure 16 - which is a caption without a figure.
Also here's a list of typos I came across while reading if it's helpful to correct them:
- Line 18, right: “Specifically, we ask:” is repeated twice
- In Line 242, Right, you refer to a "linear mapping", but previously had only mentioned the autoencoder and an affine map. Is the "linear map" supposed to refer to the affine map (which the reference points to)?
- Line 318: "form" -> from?
___
Note after rebuttal period:
I'm on the fence about this paper. I think the proposed changes are a step in the right direction, but the rebuttals only partially addressed my concerns about novelty and claims of generality, as well as limited analysis. And the changes seem like a major revision of the originally submitted manuscript. For these reasons, I'm leaning reject, but if the revisions are done properly and scope the paper contributions better then I could see this work being accepted.
Questions For Authors: I've tried to include questions in all the above sections.
1. Clarifying the scope of the contributions of the paper by making them more targeted and specific to the actual experimental findings would help me consider increasing my score.
2. Which experimental results did you find the most exciting or convincing? Can you help me understand what makes your results stand out from previous work that has studied either activation steering or activation transfer between models?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our motivation and providing very helpful comments. We would like to respond to the points raised to address concerns as well as to improve the paper.
1. General Claims: Thank you for pointing this out. See response to reviewer ort7 (paper cleanup).
* General Steering Transfer: In response to Reviewer ort7, we are exploring if this method can transfer diffuse capabilities. Preliminary results are in our response to Reviewer ort7 (Point 2); finalized findings will be put in the camera-ready.
* Switching between model versions: Our revised claim states: “Model version switching is effective for backdoor removal.” We no longer imply general applicability. We plan to evaluate this method on tasks of varying complexity and identify layers best suited for version transfer in the camera-ready version.
* Cross architecture transfer: we clarify that this is effective between models with similar vocabulary spaces, such as Qwen and LLaMA, but less successful between models with divergent vocabularies under our current method. Updated table here (https://imgur.com/a/NBY9qZ3 )
We note that Section 6 (affine transfer) is not a core contribution, but serves as a baseline to highlight the need for non-linear mapping between activations.
2. Evaluation on OOD benchmark:
We observe that mapped models for the Code Vulnerability and “I HATE YOU” tasks degrade on MMLU, as expected since the autoencoder is trained on an OOD dataset. We evaluate on a condensed MMLU set (228 examples, 4 per task), with (for example) an MMLU score of 0.61 for Qwen-2.5-1.5B-Instruct trained on “I HATE YOU”, as opposed to a zero MMLU score when mapped activations are patched in from Qwen-2.5-0.5B-Instruct. As MMLU is a multiple-choice task, this result is not surprising. We find a lesser degradation on condensed SQuAD, going from 0.604 on Qwen-2.5-1.5B-Instruct trained on “I HATE YOU” to a score of 0.271 with mapped activations.
In contrast, in-distribution mapping on hh-rlhf is better preserved (see our response 3 to Reviewer qxwo), suggesting transfer is more tractable with curated data.
3. References: We thank the reviewer for highlighting these works. We find papers [3, 4, 6, 7, 8] to be very relevant and will cite them.
Papers 1 and 2 learn linear mappings for decoding hidden features into vocabulary space within a single model. In contrast, our focus is on transferring behaviors between models, which is a distinct challenge. Papers 5 and 8 also do not involve cross-model transfers, though 8 is related to backdoors.
Papers 3 and 4 involve linear projections between image and text models for interpretability. In contrast, we aim for scalable behavioral intervention by learning a nonlinear mapping between LLM hidden layers. We find linear mappings insufficient for our task, aligning with Section 6.
Paper 6 uses activation patching to study shared circuits between base and fine-tuned models, but not for transferring behavior. Paper 7 is closely aligned with our fine-tuned-to-base experiments, though it does not train a mapper.
4. Transfer on larger model: See response to reviewer CPRq (scalability)
5. Clarifications:
a) Modified table 1 to include only 3 metrics (LLM-Judge, KL-Div and Coherence)
b) Why 100 samples used for refusal evals: This is to ensure consistency with Arditi et al who evaluate their approach on a sample size of 100. We want to compare our mapped vector to theirs.
c) What positions are the steering vectors applied to: For our steering vector applications, we used different token positions depending on the specific task:
* Backdoor removal (S 4.1): Steering vectors were calculated and applied precisely at the trigger positions within prompts
* Refusal vector experiments: We performed a systematic sweep across the final positions in the sequence and found optimal performance when applying vectors at position -2 (source) and position -4 (target). Our mapped vectors were applied at all positions in the prompt, following Arditi et al.'s approach.
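A minimal sketch of the difference-in-means construction and single-position application discussed in this thread (toy dimensions and synthetic activations; the coefficient `alpha` and the chosen position are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, n_pairs = 16, 50

# Last-token activations for contrastive prompt pairs (e.g. |prod| vs |dev|).
acts_pos = rng.normal(loc=1.0, size=(n_pairs, d_model))
acts_neg = rng.normal(loc=0.0, size=(n_pairs, d_model))

# Difference-in-means steering vector.
steering_vec = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)

# Apply at one residual-stream position, e.g. position -2;
# a negative coefficient pushes away from the "positive" behavior.
resid = rng.normal(size=(10, d_model))  # (seq_len, d_model)
alpha = -1.0
resid[-2] = resid[-2] + alpha * steering_vec
```

Applying at all positions, as done for the mapped refusal vectors, simply adds `alpha * steering_vec` to every row of `resid`.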
Most interesting result:
The transfer of refusal vectors across different families, as refusal is a complex task that requires a model to identify dangerous concepts such as bomb making, self harm, etc, while answering a range of requests. There was no fine-tuning of either model, so there is no requirement that the models have been implicitly pre-aligned via training on similar datasets.
Usefulness/toy nature of tasks:
We argue that tasks such as refusal cover a large number of scenarios, so tools that provide researchers new ways to discover steering vectors in a new model are highly useful. For example, [Contextual Noncompliance in Language Models](https://neurips.cc/virtual/2024/poster/97587) introduces a deep taxonomy of refusal scenarios, several of which SOTA models struggle with.
We hope these updates address your concerns and strengthen the paper. Please let us know if further revisions would improve your evaluation. | Summary: This paper investigates the ability to transfer activation-space interventions between models by learning a mapping between two models' activation spaces. Both sparse auto-encoders (SAEs) and affine maps are considered as mapping functions for this purpose. They find that it is possible to effectively map steering vectors constructed via the difference-in-means approach between differently-sized LLMs within the same model family. This is demonstrated in a few different testbeds including refusal steering and eliciting backdoors.
Claims And Evidence: - The abstract mentions "convergence [of representations] across domains, modalities, and architectures". However, little evidence is provided for cross-architecture transfer as mentioned in the paper's Limitations section.
- The evidence for representation transfer between differently sized models from the same family is strong. For example, the results in Figure 4 showing the efficacy of the mapped refusal vector are pretty good.
- The paper mostly explores transferring between differently-sized models from the same family. These models use the same tokenizer and have the same embedding dimension, which means that vectors closely related to specific tokens (e.g. the backdoor features, which are particularly tied to certain token IDs) will have the same token-space representation in both models. This could explain some of the transfer efficacy (and not necessarily the similarity of abstract representations). It would be useful if the authors performed some more experiments on highly abstract, not token-specific features (e.g. responding in a humorous fashion). Refusal is somewhere in the middle (it is associated with specific refusal tokens like "I'm sorry" but is indeed triggered by abstract concepts).
Methods And Evaluation Criteria: - The authors evaluate on a number of model families (Llama, Qwen, Gemma) and sizes (0.5B -> 3B). However, I would encourage them to try their experiments on some larger models (e.g. Llama 8B) to get a better sense of trends over scale.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design looks generally valid. The use of the mean-ablated completions as an additional baseline (where the activations are replaced by the mean of that layer's activations) was a good idea.
Supplementary Material: The appendix provided additional information on the steering techniques and included a reasonable exploration of the sleeper agent backdoor brittleness. It also provided a bunch of information on the SAE training setup.
Relation To Broader Scientific Literature: The paper fits into the broader literature on cross-model representation mapping, as described in Section 2 (Background). It also builds on previous work on activation-space interventions, specifically activation steering (Panickssery et al., 2023), (Turner et al., 2023) and ablation (Arditi et al., 2024).
Essential References Not Discussed: This work is quite similar to Lindsey et al., 2024, "Sparse Crosscoders for Cross-Layer Features and Model Diffing", https://transformer-circuits.pub/2024/crosscoders/index.html, which is not cited in the paper. They also train SAEs to map between model representations. I suggest citing this paper and pointing out how your methods and findings differ.
Other Strengths And Weaknesses: As mentioned, should cite Lindsey et al., 2024, "Sparse Crosscoders for Cross-Layer Features and Model Diffing", https://transformer-circuits.pub/2024/crosscoders/index.html
Other Comments Or Suggestions: N/A
Questions For Authors: - In Section 1, you mention "By transferring activations between base and fine-tuned models, we introduce an efficient method to toggle between model versions, reducing the need to store both while maintaining their respective behaviors.". How does this improve upon LoRA? Would be good to mention that LoRA is an alternative approach for this.
- Confused about section 4.1 "removing backdoor". Given that the results seem to be optimizing for a higher trigger rate in the target model, why is this "removing" and not "adding" the backdoor?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging our results (“the evidence for representation transfer between differently sized models from the same family is strong”) and appreciating our experimental design (“The experimental design looks generally valid.”) as well as providing some very helpful comments. We respond to each of the points in order to address the concerns as well as to improve the paper.
1. Cross-architectural transfer: We revise contributions to clarify transfer is effective when vocabulary spaces are similar (e.g., Qwen → LLaMA); limitations are now acknowledged. Section 5 has been expanded to better explain the setup and results. See also our response to Reviewer ort7 (Paper cleanup).
2. Transfer of abstract representations: Thank you for noting that abstract features not tied to specific tokens may be out-of-distribution for our mapper. We aim to validate this hypothesis. We believe that steering vectors computed via difference-in-means at a single position tend to remain within-distribution because:
Our mapper reconstructs activations across all positions; poor reconstruction at the final position would degrade generation when replacing full activations.
Validation metrics show ~0.9 average cosine similarity between mapped and target activations across tokens and prompts; we expect this to hold at the final position (at which steering vectors are averaged) as well.
To test this, we plan to:
(i) Compare similarity at the last token vs. the full sequence
(ii) Vary sample sizes (currently 8) and investigate change in average cos sim.
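The per-token cosine-similarity validation mentioned above can be sketched as follows (synthetic activations; this is an illustration of the metric, not the authors' code):

```python
import numpy as np

def per_token_cosine(mapped, target):
    """Cosine similarity between mapped and target activations at each position."""
    num = (mapped * target).sum(axis=-1)
    den = np.linalg.norm(mapped, axis=-1) * np.linalg.norm(target, axis=-1)
    return num / den

acts = np.random.default_rng(3).normal(size=(12, 8))  # (seq_len, d_model)
sims = per_token_cosine(acts, acts)   # identical tensors -> all ones
assert np.allclose(sims, 1.0)
print(per_token_cosine(acts, -acts)[0])  # ≈ -1.0 for opposite directions
```

Comparing `sims[-1]` against `sims.mean()` would directly test whether reconstruction quality at the final position (where steering vectors are averaged) matches the sequence-wide average.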
Following your suggestion, we began exploring humor as an abstract feature. We identified humor steering vectors in LLaMA 3.2–3B and 3.1–8B (but not 1B); examples here:
3B: https://pastebin.com/7sNbt0kX
8B: https://pastebin.com/QPLGH2BZ
However, technical issues (incompatibility between resid_pre from TransformerLens and our torch hooks) blocked transfer experiments. Would you find it valuable for us to prioritize resolving this and including humor transfer results in the final version?
3. Scalability (Llama 8B): We fine-tune a Llama 3.1-8B on the I HATE YOU dataset and transfer backdoor removal steering vectors by learning a mapping between Llama 3.2-3B and Llama 3.1-8B. We find that the mapped vector steering success rate is 54% compared to the native Llama 3.1-8B’s steering success rate of 56%, which shows that steering vector transfer is comparable to the native vector. We note that the text similarity of mapped activations to original completions is 4.4 (very similar according to our rubric) on RLHF data, and the coherence is 3.7, the same as the original model completions, suggesting a successful transfer (See https://imgur.com/a/Up2f96N).
4. Similarity with Crosscoders: We agree that crosscoders are relevant, as both works aim to find shared latent features across models/layers. We have added this work to our background section.
Key differences:
* Architecture: We use a single encoder-decoder mapping for specific layers; crosscoders use one per layer across multiple layers.
* Representation: Our dense autoencoder supports flexible mappings; crosscoders focus on sparse features. Sparse latent transfer is promising, but may be out-of-distribution for our mapper—left to future work.
* Downstream effects: We preserve downstream behavior (via validation scores) and use our autoencoders for interventions, while SAEs (the simplest variant of crosscoders, reconstructing a single layer’s activations) often yield poor reconstructions and are not used for behavior transfer.
5. Lora baseline for base-FT transfer: We acknowledge that LoRA is a viable alternative for behavior switching and will add to the paper. However, we argue our method offers distinct advantages for the mechanistic interpretability community.
Our approach aligns the activations of the fine-tuned model with those of the base model, potentially enabling reuse of steering vectors, activation additions, and feature interpretations originally developed for the base model. While this full alignment remains a hypothesis in the current work, it offers promising ground for future investigation.
In contrast, LoRA optimizes for output similarity at the logit level and provides no guarantees about alignment in the activation space—making it less amenable to reuse of interpretability tools.
6. Confusion in 4.1: We’ve retitled this section “Replicating Backdoored Models and Removing Them via Steering.” Since Anthropic’s sleeper agent models were not released, we constructed our own backdoored models (first part), then evaluated backdoor removal via steering (second part). The section now includes clarifying text.
We hope these updates address your concerns and strengthen the paper. Please let us know if further revisions would improve your evaluation.
---
Rebuttal Comment 1.1:
Comment: I think these updates will strengthen the paper, I will raise my score to 3
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their helpful feedback and insights throughout the process and for raising their score. We're pleased that our updates have helped strengthen the paper. | Summary: This paper proposes a method to transfer interventions on activations between models. Specifically, they train an autoencoder or a linear map to transfer activations from one LLM to another LLM. This enables transfer of capabilities (jailbreaking) and even data distributions (finetuning vs. base). Moreover, the authors show that they are able to transfer capabilities across architectures.
Claims And Evidence: See strengths and weaknesses
Methods And Evaluation Criteria: See strengths and weaknesses
Theoretical Claims: See strengths and weaknesses
Experimental Designs Or Analyses: See strengths and weaknesses
Supplementary Material: See strengths and weaknesses
Relation To Broader Scientific Literature: See strengths and weaknesses
Essential References Not Discussed: There are several works which suggest that LLM activations are naturally transferable between layers which might be useful to include:
- https://arxiv.org/abs/2401.06102
- https://arxiv.org/abs/2403.10949
- https://arxiv.org/abs/2412.08686
Other Strengths And Weaknesses: # Strengths
- The authors explore whether activations from one model can be grafted onto another model by training a linear map or an autoencoder. The results show that the method is somewhat performant.
# Weaknesses
- The paper is very confusing to read and many experimental details are skipped. I'd be happy to raise my score if the paper was cleaned up so it was readable. See suggestions.
- The results themselves are limited in scope, because the paper is unfortunately written for an audience in AI safety. I think the actual method is somewhat interesting and worth demonstrating more broadly, but the tasks studied are backdoor removal + refusal vector transfer. Can you see if you're able to transfer diffuse capabilities learned from finetuning? E.g., finetune a model on some medical QA data (https://arxiv.org/abs/1909.06146) and then try to transfer the activations to a base model and see if it improves. I would be surprised if this works but still interesting to have the negative result regardless. In general, I think the method can be more broadly framed as something useful for interpretability (cross model surgery) and the current framing makes it seem like something that's only useful for safety.
Other Comments Or Suggestions: The presentation of this paper is somewhat confusing. Specific suggestions:
L273-274: "Next, we train an autoencoder and an affine map to map source model activations to the target model."
- What was the training dataset of the autoencoder? How long was it trained for? Many of these training details are missing throughout the paper and are not deferred to the appendix.
Figure 1 and Figure 2 are very detailed and can be simplified (their layout is also confusing). Please remove most of the arrows and show less actual text in the boxes. I'm not sure how this can be improved but I'm happy to give more feedback if you share an Imgur link of the proposal. The high-level tip: add less information and cut the figure down.
There are far too many validation metrics, some of which should be deferred to the appendix. At the bottom of page 3, there are 7 metrics. Many of these measure similar concepts and could probably just be condensed to an LLM judge score. This would also help fix Table 1, which is extremely unparseable.
Figure 4 has way too many bars and could be collapsed into a single figure with the llama-guard score before and after.
Section 5 is somewhat interesting but is condensed to a single section. I would expand the discussion of the results more carefully once some earlier space has been cut.
Section 7 just appears out of nowhere and feels sparse. I'd just leave it entirely in the appendix and maybe reference it in the results.
Questions For Authors: L249-253 has "However, we found that reducing the magnitude of the mapped steering vector improved performance in this case, suggesting that optimal steering magnitudes differ between mapped and native vectors." Could some numbers be shared on the difference between the optimal steering magnitudes and the resulting performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the validity of our work (“The results show that the method is somewhat performant.”) and providing very helpful comments. We respond to each of the points raised in order to address the concerns and improve the paper.
1. Paper cleanup
* Revised contributions: We improved clarity and specificity by grounding each claim in its associated task:
Claim 1 (Representation Transfer): Now explicitly tied to results on backdoor removal and jailbreaking tasks.
Claim 2 (Corrupted Capabilities): Clarified that difference-in-means struggles in this setting; we compare transferred vs. native steering vectors.
Claim 3 (Model Version Switching): Scoped more narrowly to backdoor removal; broader capability transfer is left as future work.
Claim 4 (Cross-Architecture Transfer): Refined to reflect success only when vocabulary spaces are similar; limitations are now acknowledged.
See revised contributions - https://imgur.com/e3dHBjJ
* Section 5 expansion: We expanded Section 5 to better describe cross-architecture transfer. More precisely, we now clarify that our method performs comparably to same-family transfers when source and target models share vocabulary space. Performance degrades when vocabularies diverge. Figures illustrate this trend: (https://imgur.com/a/NBY9qZ3 ).
* Move Section 7 to the appendix: As SAE feature transfer remains work-in-progress, and to balance feedback across reviewers, we moved Section 7 to the appendix. This enabled us to expand on related work and add more implementation details.
* Figure/Table revisions:
(i) Figures 1 & 2: Updated (https://imgur.com/a/kxCvJxM ). Please review.
(ii) Table 1: We show 3 core metrics (LLM-Judge, KL-div, and Coherence); others are deferred to the appendix for clarity.
(iii) Figure 4: We moved substring match scores to the appendix.
2. Scope limited to AI safety: We are enthusiastic about our AI safety applications because:
(i) understanding common jailbreaking mechanisms among models is important for making models that are more robust and aligned with human values (see for instance [Safety At Scale](https://arxiv.org/abs/2502.05206), [Persistent Pre-Training Poisoning of LLMs](https://arxiv.org/abs/2410.13722))
(ii) the north-star aim of this line of research, facilitating AI safety interventions adapted from a small core set of models, could eventually drive far broader adoption of AI safety techniques in the wild by making them more practical.
Thank you for suggesting the medical QA use case. Due to time constraints, we used an existing model (Ellbendls/llama-3.2-3b-chat-doctor) and patched the base model's last-layer residual stream. While responses changed in the patched model, our judge (LLM, human) often couldn't reliably distinguish even the base and doctor models, even after few-shot prompt tuning, suggesting limitations in our current evaluation. See example: https://imgur.com/a/gaZLD4v
Given this, we believe our evaluation metric is currently too noisy without a dedicated doctor judge. We plan to finetune the medical model and validate our judge before forming conclusions and run a sweep to identify more effective intervention points.
3. Magnitude sweep for mapped steering: We now include a sweep over magnitudes (https://imgur.com/a/xtaG1lq) comparing native vs. mapped vector effectiveness. We find that steering works well in the 0–4 range, but performance drops at magnitude 5. Since Turner's prompt steering is sensitive to magnitude and our mapping isn’t magnitude-preserving, high magnitudes may cause mapped vectors to degrade faster than native ones. We also include a noise steering rate for reference.
4. Missing relevant works: We agree that Patchscopes is highly relevant and have now cited it. While their work uses affine mappings to explain smaller-model representations using larger ones, our focus is on scaling interventions from small to large models. Unlike their linear approach, we use a non-linear mapper, which we found to perform better. SelfIE relies on weight editing and supervised control, whereas our method preserves model weights and uses activation-based interventions. LatentQA employs gradient-based control via trained decoders, in contrast to our simpler approach of identifying and transferring steering vectors.
5. Missing autoencoder training details: We now clarify in Section 4.2 that the autoencoder is trained on each task's training split and evaluated on the test split (details in D.2). Training lasts 3 epochs. Task-specific dataset sizes (train/test) are now listed explicitly:
I Hate You: Train = 86.7K, Test = 9.64K
Code Vulnerability: 126K, 11.7K
Corrupted Capabilities: 19.2K, 2.4K
Refusal Vector: JailbreakBench (Chao et al., 2024) + WildGuardMix (Han et al., 2024)
FT-to-Base Toggle: I Hate You Dataset (details in Appendix D.3.1) : 86.7k, 9.64k
We hope these updates address your concerns and strengthen the paper. Please let us know if further revisions would improve your evaluation.
---
Rebuttal Comment 1.1:
Comment: - The figures are much more clear now, thanks
- I still think all three related works should be cited as they are all based on similar ideas.
- I'm happier with the changes now and I think the paper is probably overall a 2.5, so happy to raise my score and let the AC decide. (I'm overall ambivalent as I think now the paper is mostly well-executed but is hard to build on and somewhat limited in scope.)
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their helpful feedback throughout this process and for raising their score. We're pleased that our clarifications have improved the paper.
We will incorporate the three recommended citations in our final version and discuss how our work relates to these findings.
While we appreciate the suggestion to frame our work more broadly, we respectfully maintain that focusing on AI safety applications was a deliberate choice reflecting our research priorities rather than a limitation of the methodology. Papers focused solely on AI safety are considered important enough in the broader AI community to be frequently published in major conferences (see e.g. [1](https://proceedings.neurips.cc/paper_files/paper/2024/hash/fb3ad59a84799bfb8d700e56d19c231b-Abstract-Conference.html), [2](https://neurips.cc/virtual/2024/106803), [3](https://neurips.cc/virtual/2024/poster/96876), [4](https://openreview.net/forum?id=e9yfCY7Q3U), [5](https://openreview.net/forum?id=h0Ak8A5yqw)). We contend that AI safety methods “could help prevent catastrophic outcomes as AI systems become more powerful and inscrutable” [[6](https://openreview.net/forum?id=ePUVetPKu6)], and thus are important in their own right.
Our work offers several concrete contributions with clear paths for extension:
- Demonstrating weak-to-strong generalization by transferring interventions from smaller to larger models.
- Enabling efficient transfer of SAE features across architectures, transferring interpretability of features.
- Providing a first-of-a-kind approach to cross-model backdoor mitigation.
- Establishing a foundation for broader transfer applications including across text to image models, and transfer of probes across architectures
We believe these contributions and future directions demonstrate both the significance and extensibility of our work, even within our chosen focus area.
Thank you again for your time and thorough consideration throughout the review process.
1. Analysing the Generalisation and Reliability of Steering Vectors
2. What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks
3. Truth is Universal: Robust Detection of Lies in LLMs
4. Improved Techniques for Optimization-Based Jailbreaking on Large Language Models
5. On the Role of Attention Heads in Large Language Model Safety
6. Mechanistic Interpretability for AI Safety - A Review | null | null | null | null | null | null |
Efficient Bisection Projection to Ensure Neural-Network Solution Feasibility for Optimization over General Set | Accept (poster) | Summary: The paper introduces a bisection procedure to achieve feasibility of solution outputs from neural networks applied to constrained optimization problems.
## update after rebuttal:
I updated my recommendation following the discussion period.
Claims And Evidence: The paper provides a bisection procedure, establishes its claims, and presents numerical results to support its contribution.
Methods And Evaluation Criteria: The method comprises two elements: a bisection procedure and an IPNN training.
The bisection procedure is very simple and relies on basic ideas -- between a feasible point in the interior and an infeasible point there is a line in which there are points that are feasible so searching on that line can yield a closer feasible point.
IPNN is a neural network to predict interior points by minimizing some loss function with uniform sampling and Adam. Overall this is a heuristic method.
The evaluation measure is the distance between the optimal solution of the problem and the feasible point obtained from the infeasible solution output of the NN. This measure is not sufficiently motivated considering that the global optimal solution of a nonconvex problem is unattainable and it requires some interior-point reference.
Additionally, the bound established for this measure is not meaningful and seems quite straightforward.
Theoretical Claims: The theoretical claims mainly follow from simple arguments and the underlying assumptions.
I briefly verified their correctness.
Experimental Designs Or Analyses: I only checked the theoretical analysis and methodology.
Supplementary Material: I reviewed part related to the theoretical results and definitions.
Relation To Broader Scientific Literature: The key contributions are related to the implementation and utilization of NNs as optimization tools for constrained optimization, making them, in my opinion, less relevant to research-related literature.
Essential References Not Discussed: no
Other Strengths And Weaknesses: The paper proposes a technical procedure to tackle feasibility issues arising from the use of NNs to solve constrained optimization problems.
It uses straightforward technical solutions with simple theoretical analysis.
Other Comments Or Suggestions: no
Questions For Authors: no
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **Response**:
Thank you for reviewing our paper. We appreciate your feedback and the chance to address your concerns.
We believe our application of the bisection procedure to neural networks, while simple, offers a practical solution to feasibility challenges in constrained optimization. The simplicity is intentional and enables broader implementation.
Regarding your concerns about our evaluation metrics, we believe there may be a misunderstanding about our methodology. Our approach does not require global optimal solutions of nonconvex problems but rather focuses on improving feasibility while maintaining initial NN solution quality — a practical consideration for real-world applications of neural networks in optimization.
Regarding our work's relevance within the research landscape, our method contributes to the advancement of machine learning approaches for solution generation in time-sensitive applications, a value that other reviewers have also acknowledged.
In our detailed response below, we address your specific points. We also clarify the novelty, significance, and theoretical foundations of our work. We welcome your additional feedback and are committed to addressing your further comments in the following discussion period and improving both this work and our future research.
We genuinely appreciate the time you and other reviewers have invested in evaluating our submission.
---
> `C1: The bisection procedure is very simple and relies on basic ideas. IPNN is an NN to predict IPs by minimizing some loss function with uniform sampling and Adam. Overall, this is a heuristic method.`
---
Thank you for your assessment. While our approach uses fundamental principles, simplicity is advantageous when effectively solving complex problems. Our contributions are:
- Addressing research gaps in identifying quality interior points (IP) for general constraint sets and developing real-time IP prediction capabilities
- A novel loss function design specifically targeting low-eccentricity interior points to minimize projection-induced optimality gaps
- Theoretical analysis providing optimality bounds (Prop. 4.1, Theorem 1) and feasibility conditions (Prop. 5.1), establishing our method's mathematical foundation beyond mere heuristics
---
> `C2: The evaluation measure is not sufficiently motivated considering that the global optimal solution of a nonconvex problem is unattainable, and it requires some interior point reference.`
---
We acknowledge that global optimal solutions for general nonconvex problems are typically not guaranteed. However, our approach does not require global optimal solutions of nonconvex problems, but rather focuses on improving feasibility while maintaining initial NN solution quality — a practical consideration for real-world applications of neural networks in optimization.
Moreover, our experiments evaluate solution quality by comparing objective values against state-of-the-art iterative solvers, providing direct empirical validation.
---
> `C3: Additionally, the bound established for this measure is not meaningful and seems quite straightforward.`
---
The bounds provided in Theorem 1 offer practical guidance for minimizing the overall optimality gaps in our framework. Its value lies in its decomposition of the error sources:
- Initial prediction errors, which are well addressed by existing NN-based approaches;
- Projection errors, which are the primary focus of our work;
- Finite step bisection errors, which converge exponentially, demonstrating the efficiency of our framework.
This decomposition not only clarifies the theoretical foundations of our approach but also provides insights for algorithm design and optimization. By identifying the projection error as a critical component that previous works have overlooked, we establish a clear direction for improving the performance of our framework.
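Schematically, such a decomposition follows from the triangle inequality. The notation below is illustrative rather than the paper's exact statement of Theorem 1: $\hat{x}$ is the NN prediction, $x^\star(\theta)$ the optimal solution, $x_{\rm ip}$ the interior point, $x_\infty$ the limiting bisection point on the boundary, and $x_K$ the output after $K$ bisection steps:

$$\|x_K - x^\star(\theta)\| \;\le\; \underbrace{\|\hat{x} - x^\star(\theta)\|}_{\text{prediction error}} \;+\; \underbrace{\|x_\infty - \hat{x}\|}_{\text{projection error}} \;+\; \underbrace{2^{-K}\,\|\hat{x} - x_{\rm ip}\|}_{\text{bisection error}}$$

The last term reflects that each bisection step halves the remaining segment, giving the exponential convergence mentioned above.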
---
> `C4: The key contributions ..., in my opinion, less relevant to research-related literature.`
---
We respectfully disagree with this assessment.
- Neural networks for accelerating optimization problems represent an active and growing research area with significant interest from both machine learning and optimization communities, as evidenced by recent publications in top-tier venues, including ICML, NeurIPS, ICLR, and JMLR (see detailed discussions in Related Work Section.)
- Our work addresses a fundamental challenge in this domain - ensuring the feasibility of NN predictions for constrained optimization - which has been identified as a key limitation in prior work. By providing a solution with theoretical guarantees, our paper makes a meaningful contribution to this important research direction.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detail rebuttal.
I agree that the paper might provide a contribution of interest to the ML community.
However, IMHO this contribution is of a technical nature that is less suited for the ICML venue.
Therefore I keep my recommendation as is.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 8qqe,
Thank you for acknowledging our work's potential contribution to the ML community.
We respect your assessment regarding the technical nature of our contribution.
We also believe that ICML has historically welcomed a diverse range of papers spanning the theoretical-practical spectrum and many technical advances presented at ICML have later enabled applied breakthroughs.
Regardless of the final decision, we appreciate your engagement with our work and will take your feedback into consideration in future works.
Sincerely,
Authors | Summary: One of the main challenges of ML-based solution generation in constrained optimization is to ensure feasibility. This paper describes a method to produce feasible solutions that are close to ML-generated potentially-infeasible solutions for compact constraint sets with non-empty interiors, with a focus on efficiency. Like in typical ML-based solution generation, we assume that the constraint set is parameterized and we have training data over this parameter. The general idea is to learn an interior point with low eccentricity -- though in the implementation this is approximated with learning the Chebyshev center -- and use a standard bisection method between this interior point and the predicted solution to find a feasible solution close to the predicted one. This is supported by theoretical results that bound the projection distance and optimality gap based on the eccentricity of the interior point and other parameters. Computational results demonstrate that this method is fast relative to baselines with little loss to the optimality gap in most cases.
### Update after rebuttal
I maintain my review after rebuttals and discussions. I appreciate the authors making the improvements discussed in the rebuttals. I do not have further comments to add.
Claims And Evidence: The main claim of the paper is that the bisection projection method performs well for purposes of projecting a solution to a given feasible set. This is well-supported with a theoretical base and strong computational experiments. According to the experiments, this method provides feasible solutions with objective value generally comparable to the one obtained by orthogonal projection, which is the feasible solution closest to the predicted one, except that it is orders of magnitude faster. This type of computational result is valuable to make ML-based solution generation more practical.
Despite the strong end-to-end results, in my opinion there is a gap in the experimental evidence: the paper proposes the prediction of an interior point to produce the projected point, but does not evaluate this predictor, only the end-to-end method. I am interested in seeing if this approach is able to predict the actual Chebyshev center, or produce points with low eccentricity. One could compute the Chebyshev centers of these sets and compare the results. While it is perfectly valid to argue that obtaining an accurate approximation to a minimum-eccentricity point is not too important if the method works well end-to-end, I would say that such experiments would help a reader validate the theoretical reasoning behind this algorithm and ensure that each step of the way is working well. That said, although this gap is a weakness of the paper, I would not say it is a major weakness in view of the end-to-end results.
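As a concrete illustration of the kind of check I have in mind, for small polytopes one could brute-force the Chebyshev center (the point maximizing the inscribed-ball radius) and compare it to the IPNN output. This is a 2D grid-search sketch, not how one would do it at scale, and the function and variable names are mine:

```python
import numpy as np

def chebyshev_center_grid(A, b, lo, hi, n=101):
    """Brute-force Chebyshev center of {x : A x <= b} on a 2D grid:
    pick the grid point maximizing the minimum signed distance to the
    constraint hyperplanes (i.e., the inscribed-ball radius)."""
    xs = np.linspace(lo, hi, n)
    row_norms = np.linalg.norm(A, axis=1)
    best, best_r = None, -np.inf
    for x0 in xs:
        for x1 in xs:
            x = np.array([x0, x1])
            slack = (b - A @ x) / row_norms   # signed distance to each face
            r = slack.min()
            if r > best_r:
                best_r, best = r, x
    return best, best_r
```

For the unit box this recovers the origin with radius 1; for higher-dimensional sets one would instead solve the standard Chebyshev-center LP.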
Methods And Evaluation Criteria: The proposed method is clean and reasonable. While I would not consider the ideas in this paper to be very sophisticated, this relative simplicity is also what makes it appealing for practical use if one already has the framework for ML solution prediction. I am not well-versed in this line of work, but based on a quick literature search, this method appears to be novel. Furthermore, I believe that the impact of this method is fairly significant. Having a fast projection method like this one facilitates the use of ML-based solution generation methods in applications with strict latency requirements, particularly given how general this approach is (albeit limited to continuous optimization).
Aside from the experimental gap discussed above, the end-to-end experimental section is solid and comprehensive, though there is some room for improvement on the presentation of the results. The baselines are all appropriate, there is a good variety of applications, and the sensitivity analysis experiments provide additional insight. My concerns are generally minor and I put them in the Questions section below.
Theoretical Claims: The theoretical results support the effectiveness of the method, particularly showing that you can control the quality of the projection if you use an interior point of low eccentricity. I read through the theoretical proofs in the appendix superficially and did not check them in detail.
Experimental Designs Or Analyses: The experimental setup and analyses are generally solid with some room for improvement; more details are in the other sections.
Supplementary Material: I focused on the main text and read through the appendix at a more superficial level.
Relation To Broader Scientific Literature: While I am not fully familiar with the literature of projection methods in this context, this paper does a good job at summarizing work in this area. In particular, the paper puts this method into context of previous works in Proposition 5.2.
Essential References Not Discussed: I am not aware of other references that need to be included in the paper.
Other Strengths And Weaknesses: All strengths and weaknesses are discussed in other sections.
Other Comments Or Suggestions: Typos:
* Line 241: Missing closing braces.
* Prop. 5.2: Repeated "to to".
* Equation (44) is missing a subscript on \theta.
Editorial issues:
* Could you please add an explicit definition of g, if I have not missed it?
* I assume that Objective Gap (%) is averaged over the testing instances. Could you please mention it explicitly?
* The abstract mentions ensuring "NN feasibility". This was confusing at first read, as you just want to ensure solution feasibility, independently of the NN. I would suggest removing the "NN" from this expression.
Questions For Authors: 1. Please address the concerns that were discussed above.
2. I understand that speedup is relevant for comparing methods, but please include the actual compute time as well. This can be difficult to evaluate without absolute numbers for context.
3. Could you comment on empirical robustness on the quality of the objective? While I see that the average optimality gaps are good, I am interested in knowing how often this approach might produce solutions that end up with poor solution quality. Or does it always produce good solutions? If there is anything that you could add to the paper on this topic, that would be great.
4. Could you add some rough estimates on training times?
5. At the end of p.7, the paper mentions that GPU-based processing accelerates constraint checking. Could you please clarify in the paper whether you are using GPUs for constraint checking in your computational experiments, or only for the neural network training and inference?
6. Could you please add a clarification in Fig. 4 if the 0-th step is the midpoint or the interior point? If it is the interior point, it seems that the predicted AC-OPF interior point is not bad in terms of objective value; can you comment on why?
7. If I understand correctly, the bisection method could converge to a point that is far from the NN solution if there are multiple options. For example, in Figure 2, the resulting point could lie on the boundary of the hole if the bisection point happens to fall inside it. This is of course only relevant for non-convex sets, but did you observe such a scenario occurring in the applications you investigated? Is there no mechanism to avoid it? I imagine this might not be a significant issue for practical applications, but I am curious whether it could be.
8. It was not clear to me if the prediction-aware variant of this method was actually evaluated. I believe that the main results are for the prediction-agnostic variant, correct? Have you performed any evaluations of the prediction-aware version? The idea of learning the two points jointly seems promising and I am curious whether it performs well.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the recognition of our method's practical value for ML-based solution generation in constrained optimization problems, its theoretical foundation, and strong computational results. Below, we address specific points raised in the review.
---
> `C1: Ability to predict low-eccentricity points/Chebyshev centers`
---
We've conducted additional experiments to verify that our IPNN indeed produces approximated Chebyshev centers with low eccentricity. As shown in **Fig. 1-2** ([Link](https://anonymous.4open.science/r/Rebuttal_ICML_25-82D4/Reviewer%20h2io.pdf)),
- During IPNN training: the penalty loss for constraint violation decreases and the robust margin $\gamma$ increases. Thus, the minimum point-to-boundary distance increases and the eccentricity decreases
- After training, given new inputs, IPNN generalizes well to unseen input parameters, consistently producing central interior points.
For high-dimensional problems in our main experiments, directly measuring eccentricity is challenging due to difficulties in sampling boundary points, which is why we focused on end-to-end performance metrics.
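For concreteness, one plausible instantiation of the training signal described above, a hinge penalty on $\gamma$-tightened constraints plus a reward for a large robust margin $\gamma$, can be sketched as follows. This is illustrative only, not the paper's exact loss; `g` is assumed to return the vector of inequality-constraint values $g_i(x)$:

```python
import numpy as np

def ipnn_loss(x, g, gamma, lam=1.0):
    """Illustrative IPNN objective for one instance: penalize violation of
    the tightened constraints g_i(x) + gamma <= 0, while rewarding a large
    robust margin gamma (a larger feasible margin implies a more central
    interior point and lower eccentricity)."""
    violation = np.maximum(g(x) + gamma, 0.0)   # hinge penalty per constraint
    return violation.sum() - lam * gamma
```

On a simple box constraint set, the loss is lower at the center than at an off-center point for the same margin, matching the training behavior described above (penalty decreases as the predicted point moves away from the boundary).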
---
> `C2: Definition of g(·)`
---
$g(x,\theta)=[g_1(x,\theta),\cdots,g_{n_{\rm ineq}}(x,\theta)]$ refers to the constraint function for inequality constraints, as noted in Appendix A. We'll add this formal definition to the problem formulation section.
---
> `C3: Objective Gap calculation.`
---
Yes, the Objective Gap (%) is averaged over all testing instances. We'll clarify this in the experimental section.
---
> `C4: Solution feasibility beyond NN applications`
---
While we focus on ensuring NN solution feasibility for constrained optimization, we acknowledge the broader applicability of our approach. We'll discuss generalizations to other problems in the revised manuscript.
---
> `Q1: Actual baseline computation time and IPNN training time`
---
We provide the computation time for baseline iterative solvers shown in **Table 1**, and the IPNN training time for several optimization problems in **Table 2** via [Link](https://anonymous.4open.science/r/Rebuttal_ICML_25-82D4/Reviewer%20h2io.pdf). The training time ranges from 30 seconds to 8 minutes.
---
> `Q2: empirical robustness and possible failure cases`
---
Based on Prop. 4.1, our method can produce significant optimality gaps in three scenarios:
- (a) "Thin" constraint sets with unavoidably large eccentricity
- (b) Poorly selected interior points (e.g., near boundaries) causing large eccentricity
- (c) Large initial NN prediction errors
We provide **Fig. 3** ([Link](https://anonymous.4open.science/r/Rebuttal_ICML_25-82D4/Reviewer%20h2io.pdf)) to illustrate these cases, which we'll include in our revised manuscript.
---
> `Q3: GPU usage for constraint checking`
---
Yes, we use GPU for all NN-based methods and related processing (e.g., constraint checking in B-Proj and gradient evaluation in D-Proj) for fair comparison.
---
> `Q4: 0-th step in bisection and AC-OPF objective gap`
---
The 0-th step returns the interior point.
Regarding AC-OPF's objective gap: a 2.5\% gap (at the 0-th step) is significant in power systems because:
- Its objective represents generation cost for active power (a subset of all decision variables)
- Other variables primarily maintain physical power balance constraints, not in the objective
- In the power systems community [1,2], such gaps are meaningful as base generation costs are typically high (e.g., $260,200 for a 793-node network [2])
[1] P., X., ... DeepOPF: A DNN approach for security-constrained DC optimal power flow. IEEE TPS, 36(3). 2020
[2] B., S., ... The power grid library for benchmarking ACOPF algorithms. arXiv. 2019
---
> `Q5: Bisection convergence with multiple intersections`
---
The bisection algorithm can converge to one of multiple intersection points when they exist (Sec. 4.1). Although we didn't observe this in our non-convex engineering problems, we acknowledge this possibility.
To prevent convergence to points far from the initial prediction, we can adjust the bisection step size:
- Standard bisection uses step size 0.5: $a_m = (a_l+a_u)/2$
- Using larger step sizes (e.g., 0.75): $a_m = 0.25a_l+0.75a_u$ creates a trajectory starting closer to the initial point.
- This approach promotes convergence to boundary points nearer to the original prediction.
**Fig. 4** ([Link](https://anonymous.4open.science/r/Rebuttal_ICML_25-82D4/Reviewer%20h2io.pdf)) demonstrates that adjusting the step size from 0.5 to 0.75 helps avoid convergence to distant points.
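For concreteness, the adjustable-step bisection described above can be sketched as follows. This is an illustrative sketch only: the function names, the feasibility oracle, and the unit-ball example are ours, not the paper's actual implementation.

```python
import numpy as np

def bisection_project(x_interior, x_pred, is_feasible, beta=0.5, tol=1e-8):
    """Find a boundary point on the segment from x_interior (feasible)
    to x_pred (infeasible) by bisection on the scalar a in [0, 1].

    beta controls the update a_m = (1 - beta) * a_l + beta * a_u;
    beta = 0.5 is standard bisection, while beta = 0.75 biases the
    trajectory toward the initial prediction, as discussed above.
    """
    point = lambda a: (1.0 - a) * x_interior + a * x_pred
    a_l, a_u = 0.0, 1.0  # a_l is the feasible end, a_u the infeasible end
    while a_u - a_l > tol:
        a_m = (1.0 - beta) * a_l + beta * a_u
        if is_feasible(point(a_m)):
            a_l = a_m  # move the feasible bound outward
        else:
            a_u = a_m  # tighten the infeasible bound
    return point(a_l)  # last feasible iterate, approximately on the boundary

# Toy example: project onto the unit ball from an infeasible prediction.
x0 = np.zeros(2)                   # interior point
x1 = np.array([2.0, 0.0])          # infeasible NN prediction
ball = lambda x: np.linalg.norm(x) <= 1.0
proj = bisection_project(x0, x1, ball)
proj_fast = bisection_project(x0, x1, ball, beta=0.75)
```

The returned point is always the last feasible iterate, so feasibility is preserved by construction; only the number of iterations and the trajectory change with `beta`.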
---
> `Q6: Prediction-aware variant evaluation`
---
We only evaluated the prediction-agnostic version because it doesn't require optimal solution training data and already achieves small optimality gaps. Future work will explore prediction-aware variants for applications where objective values are highly sensitive to decision variables or where NN predictors struggle with initial approximations.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I have read through the responses and they generally address my questions. I only have comments on a few of them:
* **C1:** Thank you for running these experiments. I was somewhat hoping to see an evaluation over the benchmark set, but this is also fine. Please add this to the Appendix of the paper. (Note: Fix subfigure references in Fig. 1.)
* **Q1:** Thank you for the table. Just to make sure I understand this: for example, the QP solver takes on average 62ms, and your method is 2193x faster, so it takes 0.028ms? These seem to be very low solve times so I am wondering if I am misinterpreting this. In any case, if you find space, I suggest adding the data from Table 1 to the main text (rather than Appendix), since I believe this is important contextual information. Table 2 is fine as part of the Appendix.
* **Q2:** These figures are interesting to understand failure cases and I appreciate you adding them to the Appendix. I just wanted to comment that my original intent was understanding whether such failure cases occur with your method, i.e. how often the method produces low quality outliers. This is actually addressed by your Fig. 1 to reviewer cZEk. In particular, adding the margin seems to make your method much more robust, which is a positive sign.
Please make sure to add all these details to the paper. I particularly liked seeing the results of the ablation study on $\gamma$ suggested by Reviewer cZEk. I maintain my assessment of accept.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer h2io,
Thank you for your thoughtful feedback and for maintaining your acceptance recommendation.
We appreciate your careful review and have addressed your comments as follows:
- Response to **C1**: We appreciate your interest in our additional experiments. We will incorporate these results in the Appendix and fix the subfigure references.
- Response to **Q1**: Your interpretation of our results is correct.
- The QP solver (MOSEK) takes **63s in total** with full parallelization and 62ms per instance (divided by the number of instances), while our method takes approximately **29 ms in total** (0.028ms per instance).
- These remarkably low solve times result from efficient GPU batching processes, highlighting a key advantage of neural network-based approaches [1].
We agree this is important contextual information and will move the actual **total/average solving times** to the performance table in the main text of the revised manuscript.
- Response to **Q2**: We're pleased you found the failure case analysis helpful. Your observation about Fig. 1 demonstrating improved robustness with margin inclusion is indeed correct. This confirms that our method effectively reduces low-quality outliers, addressing the core concern in your original question.
We will incorporate all these details and additional experimental results in the revised manuscript, including the ablation study on $\gamma$ suggested by Reviewer cZEk.
Thank you again for your constructive feedback throughout the review process.
Sincerely,
Authors
[1] Donti, P. L., Rolnick, D., & Kolter, J. Z. DC3: A learning method for optimization with hard constraints. ICLR 2021. | Summary: The paper proposes a technique that enables projecting the outputs of neural nets onto arbitrary compact sets. The goal is for the neural net outputs to be feasible with respect to constraints that are often found in convex and non-convex optimization settings. The approach adopted relies on an efficient bisection projection.
The key idea is that, to project onto a specific set, given access to an interior point of that set and a neural net prediction outside it, we can consider the line segment from the interior point to the prediction and move along this direction until the boundary is intersected. In order to improve the efficiency of the method, the paper aims to use interior points that have low eccentricity, which basically implies that the point is more 'central' in the feasible set.
Finding such points is generally a hard problem, and the authors resort to a relaxation that uses the Chebyshev center, which finds a point maximizing the minimum distance from the boundary. To further improve efficiency, they train a neural network for this task using a suitable loss and provide sufficient conditions for this training to yield a NN that produces feasible interior points.
The method is evaluated on various convex and nonconvex problems and it consistently yields the fastest results with guaranteed feasibility and competitive objective values.
Claims And Evidence: The claims are generally well supported and the paper provides clear explanations and proofs for its claims.
- My main issue has to do with the use of eccentricity. On a formal level, since there are no assumptions on the function $f$ other than continuity, does minimizing the projection distance matter? The function can be nonconvex (it doesn't even have to be smooth), so unless the neural net prediction is already extremely close to the optimal point, the value of the objective could change significantly after projecting. I would expect the projection distance minimization coupled with some kind of smoothness assumption on $f$ to be more meaningful, or have I misunderstood something?
- Conceptually, it would be nice to provide some examples of important problems where H-proj can't be applied while B-proj can. This will help emphasize how the generality of this work yields important benefits.
Methods And Evaluation Criteria: The proposed evaluations and methods make sense for the problem and are in line with the literature.
Theoretical Claims: I have checked the claims and have briefly gone over the proofs. They seem correct.
Experimental Designs Or Analyses: - I think the use of eccentricity is not well supported by the experiments. In figure 3, we see that with or without it, a certain number of samples is necessary to achieve 100% feasibility. In those plots, at least in terms of feasibility, it doesn't seem to make a difference. I believe a more interesting experiment is an ablation on $\gamma$. Namely, what do the objective gaps look like when training the IP predictor without it? Is it necessary to deal with this issue or could any interior point work just as fine with this approach?
I can see the appeal around the eccentricity argument though I can't help but wonder how it's ever 'cashed out' experimentally. It would be good to see a more careful experimental evaluation of its contribution.
Supplementary Material: I have read parts of the appendix, including the proofs.
Relation To Broader Scientific Literature: This paper provides a quite fast and general method for ensuring the feasibility of neural net outputs. Based on the experimental results, this paper achieves the best tradeoff between speed and optimality of solutions.
Essential References Not Discussed: There is a recent paper that handles linear constraints that I believe is not mentioned in the related work. See also references in that paper and its table 1.
Zeng, Hongtai, et al. "GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent." arXiv preprint arXiv:2409.17500 (2024).
Other Strengths And Weaknesses: Overall, I think this approach balances efficiency and optimality well while improving significantly over previous work. At its core, the approach is fairly simple which makes it even more appealing. The paper is nicely written and it frames its contribution well in the context of related work.
On the other hand, to reiterate my main issue: I think the use of eccentricity (or a proxy for it anyway) should be better motivated and the experiments should also reflect its utility since a significant portion of the paper is spent discussing this. I can see at an intuitive level why you would like interior points with better 'centrality' and it helps minimize projection distance but, unless I misunderstood something, I don't think (as I explained above) a particularly compelling formal argument is made in the paper for its benefits.
In any case, I lean towards accepting this so I start with a tentative score and I will reconsider once the authors respond to the points/questions I brought up.
Other Comments Or Suggestions: n/a
Questions For Authors: - Can you explain how you encode the SDP feasibility for the NN that predicts interior points? How do you enforce the PSD constraint on the matrix variable, i.e., what does the loss function look like in that case.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the recognition that our approach balances efficiency and optimality while improving over previous work, and that the simplicity of our method adds to its appeal. Below, we address each specific point raised in the review.
---
> `C1: Assumptions on the objective function and optimality gap`
---
The reviewer raises an excellent point about the relationship between solution distance and objective gap.
- We use solution distance as our primary metric for theoretical analysis, where the optimality gap is bounded by initial prediction error and projection distance. For **Lipschitz** objective functions, solution distance directly translates to bounds on objective value gaps, as demonstrated in our experiments with linear/quadratic objectives.
We acknowledge that for general non-smooth objectives, small solution distances might still yield large objective gaps. In our revision, we will clarify this relationship and specify the applicable scenarios of objective functions.
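In symbols (our notation, not the paper's: $\hat{x}$ is the projected output, $x_{\rm pred}$ the initial NN prediction, and $x^\star$ the optimum), the Lipschitz case reads:

```latex
% Objective gap bounded via solution distance for an L-Lipschitz f,
% using the triangle inequality to split the solution distance:
|f(\hat{x}) - f(x^\star)| \le L\,\|\hat{x} - x^\star\|
  \le L\bigl(\underbrace{\|x_{\rm pred} - x^\star\|}_{\text{prediction error}}
      + \underbrace{\|\hat{x} - x_{\rm pred}\|}_{\text{projection distance}}\bigr)
```

This makes explicit why bounding the projection distance (via eccentricity) controls the objective gap under a Lipschitz assumption, while no such control is available for general non-smooth objectives.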
---
> `C2: examples where H-proj can't be applied while B-proj can.`
---
The H-Proj method [1] is limited by its ball-homeomorphic assumption. Our B-proj method can handle several practically relevant constraints that H-proj cannot:
- Non-simply-connected feasible regions (with "holes" or disconnected components) occurring in AC Optimal Power Flow problems [2] and rocket landing problems with exclusion zones [3].
- We provide a visual example in **Fig. 2** ([Link](https://anonymous.4open.science/r/Rebuttal_ICML_25-82D4/Reviewer%20cZEk.pdf)) showing where H-proj incurs significant optimality loss while our method succeeds on such topologically complex regions
[1] L., C. L. Homeomorphic Projection to Ensure Neural-Network Solution Feasibility for Constrained Optimization. Journal of Machine Learning Research, 2024.
[2] M. H... A survey of relaxations and approximations of the power flow equations. Foundations and Trends® in Electric Energy Systems, 2019
[3] A. C. B. Lossless convexification of nonconvex control bound and pointing constraints of the soft landing optimal control problem. IEEE TCST, 2013
---
> `C3: ablation study of $\gamma$ on optimality gap`
---
We appreciate this valuable feedback. We would like to clarify that:
- Fig. 3 validates Prop. 5.1, showing that the robust margin $\gamma$ reduces the number of training samples needed for feasibility under unseen inputs. With the same number of training samples, IPNN trained with $\gamma$ consistently achieves better out-of-sample feasibility rates.
- We agree that directly evaluating objective gaps would better demonstrate the effectiveness of our eccentricity-minimizing targets. We've provided additional results in **Figure 1** ([Link](https://anonymous.4open.science/r/Rebuttal_ICML_25-82D4/Reviewer%20cZEk.pdf)) showing how eccentricity minimization specifically improves solution quality (worst-case gap: 12\% (no $\gamma$) vs. 5\% (with $\gamma$) on the QP instance).
---
> `C4: recent paper on linear constraints (GLinSAT)`
---
It presents an efficient differentiable projection layer for general linear constraints. We will cite this work, along with the related works from their Table 1, and discuss their relevance in our revised manuscript.
---
> `C5: eccentricity should be better motivated. formal argument for benefits eccentricity`
---
Thanks for your feedback on the formal argument of the benefit of eccentricity minimization.
Our theoretical analysis formally establishes the benefits of eccentricity minimization:
- Prop. 4.1 shows the relationship between eccentricity and projection-induced distance, motivating our search for low-eccentricity interior points.
- Prop. 5.2 demonstrates how robust margin reduces training samples needed to achieve feasibility guarantees on unseen inputs
We'll further clarify the motivation for eccentricity with illustrations (**Fig. 3**, [Link](https://anonymous.4open.science/r/Rebuttal_ICML_25-82D4/Reviewer%20cZEk.pdf)) and explicitly present formal arguments for eccentricity minimization as bullet points in revised Section 4.2, reinforcing these points in our analysis.
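As background, the Chebyshev-center relaxation mentioned in the paper is a standard linear program: maximize the inscribed-ball radius $r$ subject to $a_i^T x + r\|a_i\|_2 \le b_i$. A self-contained sketch (illustrative only, using `scipy.optimize.linprog`; this is not the IPNN training code):

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Chebyshev center of the polytope {x : A x <= b}: the interior
    point maximizing the minimum distance r to the boundary.

    Solved as an LP over (x, r):
        max r   s.t.   a_i^T x + r * ||a_i||_2 <= b_i   for all i.
    """
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    A_lp = np.hstack([A, norms])             # rows [a_i, ||a_i||]
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0                             # linprog minimizes, so maximize r
    bounds = [(None, None)] * A.shape[1] + [(0, None)]
    res = linprog(c, A_ub=A_lp, b_ub=b, bounds=bounds)
    return res.x[:-1], res.x[-1]             # center, radius

# Unit square 0 <= x, y <= 1: center (0.5, 0.5), radius 0.5.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
center, radius = chebyshev_center(A, b)
```

The LP itself only yields a low-eccentricity point for a fixed instance; IPNN amortizes this computation across instances with a learned predictor.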
---
> `Q1: SDP feasibility and how to enforce the PSD constraint`
---
For SDP problems (linear equality and PSD inequality):
- IPNN predicts independent variables and reconstructs dependent ones using equality constraints (Appendix A), then reshapes the vector prediction into matrix form.
- For inequality constraints, we use penalty methods to minimize constraint violation.
For PSD constraint:
- We use matrix sketching [1] to capture constraint violations by $v^TXv$ in a differentiable way, where $v$ is iteratively calculated.
- By maximizing $v^TXv$ when negative, IPNN learns to find interior points for PSD constraints.
- During bisection projection, we directly check PSD feasibility using torch.linalg.cholesky_ex()
[1] N., D., S., W., W., D. P. Testing positive semidefiniteness using linear measurements. IEEE FOCS. 2022
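A minimal sketch of the two checks described above, with hedged stand-ins: `np.linalg.cholesky` in place of `torch.linalg.cholesky_ex()`, and a direct eigendecomposition in place of the iterative matrix-sketching computation of $v$ (function names are ours):

```python
import numpy as np

def cholesky_feasible(X, shift=0.0):
    """Feasibility oracle for the PSD cone via a Cholesky attempt,
    mirroring the torch.linalg.cholesky_ex() check described above:
    np.linalg.cholesky succeeds iff the matrix is positive definite.

    A small positive `shift` tests strict interiority, X - shift*I > 0.
    """
    X = 0.5 * (X + X.T)  # symmetrize against round-off
    try:
        np.linalg.cholesky(X - shift * np.eye(X.shape[0]))
        return True
    except np.linalg.LinAlgError:
        return False

def min_rayleigh(X):
    """Smallest Rayleigh quotient v^T X v over unit v for symmetric X.
    If this is negative, the direction v witnesses the PSD violation
    that the penalty term maximizes during IPNN training.
    """
    w, V = np.linalg.eigh(0.5 * (X + X.T))
    v = V[:, 0]            # eigenvector of the smallest eigenvalue
    return v @ X @ v       # equals the smallest eigenvalue

X_good = np.eye(3)
X_bad = np.diag([1.0, -0.5, 2.0])
```

Note that a Cholesky-based check accepts only strictly positive definite matrices, which suits the interior-point setting where strict feasibility is exactly what is needed.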
---
Rebuttal Comment 1.1:
Comment: OK nice, please make sure you specify those details about the PSD constraint in the paper. In fact, I think highlighting the need for (essentially) a separation oracle and discussing the potential cost of that is essential in this approach. For example it is clear that you tested on small SDPs and my guess is that it has to do with the problem of checking feasibility fast for large instances there. Your assumptions rule out constraints that show up in discrete optimization problems (and you do mention that plus the concern about the cost of evaluating the feasibility of a point) but I think this is an important limitation of the method that should be given room for discussion in order to also encourage future work.
On the other hand, I can imagine that modifications of this method could potentially provide a possible direction for the discrete case too and the idea considerably simplifies previous projection methods which I find quite appealing. In my view this is a nice paper with a simple idea that works quite well so I recommend acceptance
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer cZEk,
Thank you for your thoughtful feedback and recommendation.
We appreciate your insights regarding the PSD constraint and separation oracle aspects.
- We will ensure the final version includes all details about the PSD constraint implementation and the additional experiments presented in the rebuttal.
- As suggested, we will incorporate a discussion highlighting the necessity of a separation oracle for feasibility checking and analyze its computational complexity across various constraint sets, with particular emphasis on SDP problems.
- We acknowledge the limitations of current methods that focus primarily on continuous domains, and will discuss the potential of our framework for discrete/combinatorial optimization as a significant future direction (e.g., discussing how applying SDP can serve as a continuous relaxation for combinatorial problems).
Thank you again for your constructive feedback throughout the review process.
Sincerely, Authors | null | null | null | null | null | null | null | null |
PyTDC: A multimodal machine learning training, evaluation, and inference platform for biomedical foundation models | Accept (poster) | Summary: This paper describes PyTDC, an open-source software platform for training, evaluation, and use of biological foundation model. This provides API access to multimodal biological data-sets, a variety of tasks and associated metrics, and APIs for model retrieval and deployment. The utility of this system is demonstrated through the comparison of different models on a context-aware drug target prediction task.
Claims And Evidence: This paper is largely descriptive, rather than making specific scientific claims and providing evidence for them. The stated characteristics and utility of the system is evidenced through high-level diagrams and an example application.
Methods And Evaluation Criteria: This work does not present novel methods or evaluation criteria, rather it describes a system for evaluating existing methods under existing evaluation criteria.
Theoretical Claims: There are no theoretical claims in this work
Experimental Designs Or Analyses: The evaluation experiment describe is valid and explained clearly.
Supplementary Material: I did not review the supplementary material
Relation To Broader Scientific Literature: The paper describes a system which looks to be of general use, however I question whether this is the right venue. It would be more suited to a systems-focused workshop, e.g., MLSys.
Essential References Not Discussed: The relevant related works are discussed, as far as I can tell
Other Strengths And Weaknesses: The paper describes a useful system, which provides a basis for further development and optimisation of multimodal biological foundation models. This should be of general interest to the ICML community and provide a foundation for the development of biological foundation models.
Other Comments Or Suggestions: No other comments
Questions For Authors: No questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Reviewer ckR1,
Thank you very much for the time dedicated to reading the paper and providing your review. Addressing your points below:
--
"The paper describes a useful system. The key weakness is a lack of scientific novelty, which makes it more suited to a systems-focused conference or workshop." and "The paper describes a system which looks to be of general use, however I question whether this is the right venue. It would be more suited to a systems-focused workshop, e.g., MLSys."
-> The ICML 2025 Call for Papers lists the following as of interest for the main conference: Machine Learning Systems (improved implementation and scalability, hardware, libraries, distributed methods, etc.). Our paper is in line with this interest.
-> The ICML 2025 Call for Papers lists the following as of interest for the main conference: Application-Driven Machine Learning (innovative techniques, problems, and datasets that are of interest to the machine learning community and driven by the needs of end-users in applications such as healthcare, physical sciences, biosciences, social sciences, sustainability and climate, etc.). Our paper is in line with this interest.
-> The paper establishes novelty on the fronts of model evaluation criterion, datasets, task definitions, and machine learning system design.
-> As an example of precedent for ICML publication, our paper cites the ICML paper (Kahn et al., 2022). It is also an ML systems paper and is a good example we'll cite for some of your other expressed concerns: https://proceedings.mlr.press/v162/kahn22a/kahn22a.pdf
--
"This paper is largely descriptive, rather than making specific scientific claims and providing evidence for them. The stated characteristics and utility of the system is evidenced through high-level diagrams and an example application."
-> This is in line with MLSys publications at ICML such as (Kahn et al., 2022) https://proceedings.mlr.press/v162/kahn22a/kahn22a.pdf
-> Papers presenting new software (models, algorithms, systems, etc.) are necessarily descriptive, including those published at ICML. To illustrate this, an ICML paper chosen at random from last year is also largely descriptive, with results shown starting on page 6. https://openreview.net/pdf?id=HsseRq2FAx
--
"This work does not present novel methods or evaluation criteria, rather it describes a system for evaluating existing methods under existing evaluation criteria."
-> The comparisons in Table 1 demonstrate the significant novel contribution of PyTDC.
-> The novel dataset collection, benchmark organization, metric design, and API architecture are well-executed and demonstrate the platform's utility.
-> The proposed methods and evaluation criteria seem appropriate for the problem domain. The datasets are well-curated across diverse modalities, and the evaluation metrics appear aligned with both machine learning performance assessment and domain-specific biological relevance, such as out-of-distribution generalizability across cell types.
--
Overall: This review is correct in stating that we don't emphasize novel theoretical claims. However, it appears to over-generalize this point and argue that the work does not present novelty aligned with ICML interests. In this rebuttal, we re-emphasize our novelty and how it aligns with the ICML Call for Papers.
--
In light of this, we kindly request the reviewer re-evaluate their claims and recommendation, and share any further reservations. A revision of the claims made in the review, given inconsistency with ICML criteria and precedent as well as with the content of the paper, and, consequently, the overall recommendation, should be considered. | Summary: This paper introduces PyTDC, an open-source machine learning platform designed to support the training, evaluation, and inference of multimodal biological AI models, with a strong focus on the single-cell domain. PyTDC facilitates integration of diverse biological data sources from single-cell gene expression, perturbation responses, protein-peptide interactions, and clinical trial data into contextualized machine learning tasks. Key tasks include single-cell drug-target nomination, perturbation response prediction, and cell-type-specific protein-peptide interaction prediction.
A central contribution is the adoption of an API-first architecture, implemented using a Model-View-Controller (MVC) design pattern. This architectural choice allows seamless integration of heterogeneous and continually updated data sources.
The paper also introduces a benchmarking and model retrieval infrastructure, which allows researchers to evaluate state-of-the-art models, fine-tune them on task-specific data, and assess performance on out-of-distribution samples. Notably, the paper highlights the poor performance of existing graph-based and domain-specific methods on the newly introduced tasks, particularly in out-of-distribution contexts, underscoring the need for context-aware, multimodal foundation models in the biomedical domain.
Claims And Evidence: The paper provides reasonably convincing evidence for the overall utility of PyTDC as a multimodal biomedical benchmarking platform. This is supported by a comprehensive case study on the single-cell drug-target nomination task, where several state-of-the-art models (including graph representation learning methods and domain-specific approaches) are benchmarked against the newly introduced contextualized tasks. The case study effectively highlights gaps in current model performance, particularly in out-of-distribution settings, supporting the need for improved multimodal, context-aware methods. The webserver and model retrieval infrastructure appear to be well-implemented and accessible.
However, the necessity of adopting the specific 'API-first' architecture and Model-View-Controller (MVC) design pattern is underexplained. While these approaches are well-established in software engineering, the paper does not clearly justify why they are uniquely advantageous for this use case compared to alternative architectures, including comparison with previous versions. Further explanation is needed to clarify how this design improves the implementation of the pyTDC server, from perspectives including modularity, scalability, and reproducibility.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria seem appropriate for the problem domain. The datasets are well-curated across diverse modalities, and the evaluation metrics appear aligned with both machine learning performance assessment and domain-specific biological relevance, such as out-of-distribution generalizability across cell types.
Theoretical Claims: As I have understood, this manuscript does not present or rely on any formal theoretical claims or mathematical proofs, which is appropriate for a system and benchmarking paper of this nature. The focus is on the development of infrastructure, integration of multimodal data, and empirical benchmarking of biological machine learning tasks.
Experimental Designs Or Analyses: The experimental design and case study focused on the single-cell drug-target nomination task were reviewed in detail, including inspection of the provided code examples for loading datasets, training workflows, and model benchmarking results. The experiments are generally sound and well-documented, with clear pipelines for data processing, training, and evaluation.
Supplementary Material: The appendix sections and the associated web server were thoroughly reviewed, with particular focus on the newly introduced tasks, benchmark design, and evaluation metrics. The provided code samples were also reviewed and appear well-documented and modular, making it relatively straightforward for users to reproduce results or adapt the code for related tasks.
Relation To Broader Scientific Literature: The contributions of this paper align with and extend several active research domains at the intersection of multimodal biomedical machine learning, biological data platforms, and therapeutic AI modeling. Specifically, PyTDC expands existing biomedical data platforms by adding multimodal contextualization, foundation model evaluation, and task-specific benchmarking infrastructure, filling a critical gap for researchers developing the next generation of multimodal foundation models for therapeutic discovery.
Essential References Not Discussed: None to be addressed.
Other Strengths And Weaknesses: **Strengths**
The overall contribution of this paper is significant, particularly in the context of accelerating multimodal AI research for therapeutic discovery and precision medicine. By combining diverse biological data modalities with a unified benchmarking and inference platform, PyTDC has the potential to serve as a foundation for interdisciplinary studies across computational biology, cheminformatics, and clinical data science.
One of the key strengths is its integration of multimodal data with model retrieval, fine-tuning, and evaluation capabilities, which is often fragmented across different tools and platforms. The open-source nature and availability of a web interface may further lower the barrier to entry for both computational researchers and domain experts.
**Weaknesses**
See `Claims And Evidence` section paragraph 2.
Other Comments Or Suggestions: Minor comments:
1. There are several technical issues with references throughout the manuscript, particularly in the Appendix. Compilation errors resulting in `??` citations appear in multiple locations, including: Page 3 line 129 (right column), Page 21, Page 24 lines 1316~1318, Page 38
2. In Table 1, the check marks and cross marks could benefit from improved visual contrast, such as color-coding or distinct icons, to improve readability. Currently, the marks are difficult to distinguish at a glance.
3. There is confusion between the terms "pyTDC" and "TDC-2" across the manuscript and appendix. This includes Figure 1 and multiple locations throughout the appendix. The paper needs to be clearer about whether TDC-2 refers to a previous version, a distinct system, or a legacy project at each point where the term appears, and ensure terminology is consistent across all sections of the paper.
Questions For Authors: 1. While the authors highlight the novelty of implementing the ‘API-first’ architecture via MVC design as one of the main contributions, there are limited demonstration or explanation to support the statement throughout the paper. How does this architecture differ from other versions or databases? In which way might the users benefit from such design? A section for supporting such claim, either quantitatively or qualitatively, may clarify the readers in the technical contribution of pyTDC architecture.
2. Another area of concern relates to the redistribution and usage of pre-trained models and datasets through the PyTDC platform. Given the diverse licensing terms associated with pre-trained models (some under open-source licenses, others with academic or non-commercial restrictions), how does PyTDC plan to:
- Clearly surface licensing terms and permissible use when users retrieve models especially through python editors?
- Ensure that models with restrictive licensing (e.g., non-commercial, academic use only) are not inadvertently used in ways that violate their original licenses?
Including a transparent licensing and provenance management layer would greatly enhance trust, reproducibility, and adoption in both academic and commercial settings, particularly for users in regulated industries like pharmaceutical research.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **However, the necessity for adopting the specific 'API-first' architecture and Model-View-Controller (MVC) design pattern are underexplained. While these approaches are well-established in software engineering, the paper does not clearly justify why they are uniquely advantageous for this use case compared to alternative architectures, including comparison with the previous versions. Further explanation is needed to clarify how this design improves the implementation of the pyTDC server, in perspectives including modularity, scalability, or reproducibility.**
We ensure up-to-date retrieval by invoking external APIs rather than storing static data. The chosen MVC design pattern and the resulting “data view” allow us to perform data manipulation per dataset definition and, as such, obtain an ML-ready dataset fitting the task or resource definition. The single-cell gene expression data retrieval API and the shown DSL-based dataset are examples of data views leveraging this pattern, which we can more clearly emphasize in the paper.
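As a conceptual illustration of the data-view pattern described above (all names here are hypothetical, not the PyTDC API): a view binds on-demand source retrieval to a transform pipeline, so the materialized dataset always reflects the upstream APIs rather than a static file dump.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class DataView:
    """A 'view' in the MVC sense: named sources fetched on demand
    (the model layer), composed by transforms into an ML-ready dataset.
    Illustrative sketch only; names do not correspond to PyTDC code.
    """
    sources: Dict[str, Callable[[], Any]]                       # e.g. external API calls
    transforms: List[Callable[[Dict[str, Any]], Dict[str, Any]]]

    def materialize(self) -> Dict[str, Any]:
        data = {name: fetch() for name, fetch in self.sources.items()}
        for transform in self.transforms:
            data = transform(data)
        return data

# Toy example: join two mock 'API' responses into paired features/labels.
view = DataView(
    sources={
        "expr": lambda: {"cell": ["a", "b"], "x": [1.0, 2.0]},
        "meta": lambda: {"cell": ["a", "b"], "type": ["T", "B"]},
    },
    transforms=[
        lambda d: {**d, "joined": list(zip(d["expr"]["x"], d["meta"]["type"]))}
    ],
)
dataset = view.materialize()
```

Because sources are callables rather than cached files, swapping an endpoint or adding a transform changes the view definition without touching downstream task code, which is the modularity argument for the MVC split.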
**Minor comments:
There are several technical issues with references throughout the manuscript, particularly in the Appendix. Compilation errors resulting in ?? citations appear in multiple locations, including: Page 3 line 129 (right column), Page 21, Page 24 lines 1316~1318, Page 38
...**
Thank you. We will address all of the minor comments raised in the camera-ready version of the paper.
**While the authors highlight the novelty of implementing the ‘API-first’ architecture via MVC design as one of the main contributions, there is limited demonstration or explanation to support this statement throughout the paper. How does this architecture differ from other versions or databases? In which way might users benefit from such a design? A section supporting this claim, either quantitatively or qualitatively, may clarify for readers the technical contribution of the pyTDC architecture.**
We contrast this architecture difference with other benchmarks in B.1, noting that alternatives, including the previous version (which we can note as well), provide access to “file dumps.” We can reword and expand this to more thoroughly distinguish between static datasets provided by alternatives and our “API-first dataset.” The MVC allows us to implement data views (https://www.w3schools.com/mysql/mysql_view.asp), our chosen implementation of the “API-first dataset” abstraction.
In B.1.3 we state “[PyTDC] develops an Application-Embedded Domain-Specific Data Definition Programming Language facilitating the integration of multiple modalities by generating data views from a mapping of multiple datasets and functions for transformations, integration, and multimodal enhancements, while…”.
We agree that, while all of the components for answering your query are present in the paper (and shared above), a more distinguishable narrative would be helpful. We will add to our discussion of the architecture a section more thoroughly detailing: the API-first architecture and the benefits of invoking external APIs; the “API-first dataset” abstraction; the choice of the MVC design pattern and how it implements data views and the overarching “API-first dataset”; and illustrations via existing data views. We cite works demonstrating the individual benefits; our contribution is composing these into an innovative design.
**Another area of concern relates to the redistribution and usage of pre-trained models and datasets through the PyTDC platform. Given the diverse licensing terms associated with pre-trained models (some under open-source licenses, others with academic or non-commercial restrictions), how does PyTDC plan to: - Clearly surface licensing terms and permissible use when users retrieve models especially through python editors? - Ensure that models with restrictive licensing (e.g., non-commercial, academic use only) are not inadvertently used in ways that violate their original licenses? - Including a transparent licensing and provenance management layer would greatly enhance trust, reproducibility, and adoption in both academic and commercial settings, particularly for users in regulated industries like pharmaceutical research.**
We surface license terms on the website for all datasets and will do so for models on our model hub pages as well. We can explore the possibility of augmenting classes to expose licenses more easily. Beyond exposing licenses, it is likely impractical for PyTDC to prevent license violations in its current state as a PyPI package (and Harvard Dataverse repository). However, deploying a transparent licensing and provenance layer is certainly a requirement of interest for deploying a distributed service based on the current PyTDC platform. This has been added to our product backlog for future releases.

Summary: The paper introduces **PyTDC**, a multimodal machine learning platform designed for training, evaluation, and inference of biomedical foundation models. The platform aims to address the limitations of existing biomedical benchmarks by providing end-to-end infrastructure for integrating multimodal biological data and supporting a broad range of machine learning tasks in therapeutics. Key contributions of PyTDC include: (1) integration of single-cell analysis with multimodal machine learning, (2) continuous data updates and heterogeneous data sources, (3) model server for inference and fine-tuning, (4) case study on single-cell drug-target nomination, and (5) context-specific metrics. Overall, PyTDC aims to accelerate research in biomedical AI by providing a unified platform for multimodal data integration, model training, and evaluation, with a focus on therapeutic applications. The platform is open-source and designed to facilitate the development of context-aware, multimodal foundation models for biomedical research.
## Update after Rebuttal
I am now in favor of accepting this paper.
Claims And Evidence: As an infrastructure paper, the claims of contributions of PyTDC are concrete. The only concern for me is that the target of PyTDC is supporting "biomedical foundation models", which may be somewhat overclaiming. Biomedicine encompasses a vast range of disciplines, for which various tasks, datasets, and benchmarks have been proposed. There have also been many models claimed to be "biomedical foundation models", which actually focus on very different applications, such as clinical NLP, medical image processing, and drug discovery. Therefore, as an AI platform, PyTDC can hardly cover the whole field of "biomedical foundation models".
Methods And Evaluation Criteria: The PyTDC platform offers multiple good features for such kinds of infrastructure, including open-source datasets and plug-and-play APIs. The comparisons in Table 1 demonstrate the significant contribution of PyTDC.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: The only experiments in Section 4.3 are completely newly built and act as a case study rather than the main contribution of this paper.
Supplementary Material: The appendix is reviewed.
Relation To Broader Scientific Literature: PyTDC reviews related works in detail, and its contribution compared to the existing works is significant.
Essential References Not Discussed: For the structure-based drug design component of PyTDC, beyond the scoring functions already included, there are two recent popular aspects to be considered in evaluating the molecules:
1. The physical plausibility of 3D structures, such as PoseCheck [1].
2. The diversity of generated molecules, new metrics including NCircles [2] and HamDiv [3].
The SBDD component may not be the main contribution of PyTDC, but as a comprehensive benchmarking platform, I believe these metrics would help.
[1] Harris C, Didi K, Jamasb A, et al. PoseCheck: Generative Models for 3D Structure-based Drug Design Produce Unrealistic Poses[C]//NeurIPS 2023 Generative AI and Biology (GenBio) Workshop.
[2] Xie Y, Xu Z, Ma J, et al. How Much Space Has Been Explored? Measuring the Chemical Space Covered by Databases and Machine-Generated Molecules[C]//The Eleventh International Conference on Learning Representations.
[3] Hu X, Liu G, Yao Q, et al. Hamiltonian diversity: effectively measuring molecular diversity by shortest Hamiltonian circuits[J]. Journal of Cheminformatics, 2024, 16(1): 94.
Other Strengths And Weaknesses: None. See above parts.
Other Comments Or Suggestions: Tiny improvements in writing could be made:
1. In Section 4.3, "PPI" and "ppi" are mixedly used.
2. Some citations in-line act as components of sentences, where \citet should be used instead of \citep. For example, the last line of page 7.
3. "Table" vs. "table", "Figure" vs. "figure", "Section" vs. "section" are mixedly used in many places, which should be unified.
Questions For Authors: 1. How many single-cell data items are there in the dataset? Is this amount enough for training a robust model?
2. Will the PyTDC API be freely available to academic researchers?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Dear Reviewer exTK, thank you very much for your time dedicated to reviewing our work. Including responses to your queries below.
**The only concern for me is that the target of PyTDC is supporting "biomedical foundation models", which may be somewhat overclaiming. Biomedicine encompasses a vast range of disciplines, for which various tasks, datasets, and benchmarks have been proposed. There have also been many models claimed to be "biomedical foundation models", which actually focus on very different applications, such as clinical NLP, medical image processing, and drug discovery. Therefore, as an AI platform, PyTDC can hardly cover the whole field of "biomedical foundation models".**
This is true. Supporting biomedical foundation models is an aspirational goal in this case. We focus this release on single-cell foundation models and are working to add ESM3 to incorporate the protein sequence, structure, and function modalities. We can either make a stronger note of this limitation or rephrase our claim to be “foundation models in therapeutic discovery”.
**For the structure-based drug design component of PyTDC, beyond the scoring functions already included, there are two recent popular aspects to be considered in evaluating the molecules:
The physical plausibility of 3D structures, such as PoseCheck [1].
The diversity of generated molecules, new metrics including NCircles [2] and HamDiv [3].
The SBDD component may not be the main contribution of PyTDC, but as a comprehensive benchmarking platform, I believe these metrics would help.**
We agree. We are still working on the final release of the codebase, which could be released simultaneously to a camera-ready version. This suggestion has been added to the list of tasks for that release.
**Tiny improvements in writing could be made:
In Section 4.3, "PPI" and "ppi" are mixedly used.
Some citations in-line act as components of sentences, where \citet should be used instead of \citep. For example, the last line of page 7.
"Table" vs. "table", "Figure" vs. "figure", "Section" vs. "section" are mixedly used in many places, which should be unified.**
This will be done for any camera-ready version.
**How many single-cell data items are there in the dataset? Is this amount enough for training a robust model?**
Our website and appendix show the total number of items for all datasets introduced in the paper. For the case study on the single-cell drug target nomination task, we have 394760 (cell, gene) pairs in the dataset. We will mention this explicitly when we release the documentation for the pinnacle resource. This dataset was large enough to train PINNACLE and Node2Vec to satisfactory results. It is possible a larger dataset would have yielded better performance from the tested transformer models. However, we note the dataset presents what is realistically feasible to curate for these diseases based on the current literature.

Summary: PyTDC is introduced as a cutting-edge multimodal machine learning infrastructure designed to streamline the training, evaluation, and inference of biomedical foundation models. By unifying heterogeneous, continuously updated data sources and providing a model server for seamless access to pre-trained models and inference endpoints, PyTDC addresses the fragmentation in existing biomedical benchmarks. It supports a wide range of therapeutic tasks, including single-cell drug-target nomination, perturbation response prediction, and protein-peptide interaction prediction, while introducing context-specific metrics to ensure model performance aligns with biomedical goals. As an open-source platform, PyTDC accelerates research by offering modular, customizable tools for multimodal data integration, enabling the development of robust, context-aware foundation models for biomedical AI.
## Update after rebuttal
My Overall Recommendation remains the same.
Claims And Evidence: All the claims in the paper are sound with corresponding contributions in the platform. However, there are two aspects to be clarified:
- PyTDC doesn't include text data, yet natural language is a key modality in biomedicine. Is it appropriate to claim comprehensive support for biomedical foundation models without this modality?
- As a non-commercial platform, how does the group implement the continuous update of new data?
Methods And Evaluation Criteria: The platform is highly meaningful, providing essential support for the development of biomedical foundation models. The dataset collection, benchmark organization, metric design, and API architecture are well-executed and demonstrate the platform's utility.
Theoretical Claims: None.
Experimental Designs Or Analyses: This paper is not based on comparative experiments and analyses. As an open-source, API-based platform, the user experience of PyTDC will require more feedback from future users to fully evaluate its effectiveness and usability.
Supplementary Material: No supplementary material is provided.
Relation To Broader Scientific Literature: PyTDC provides a fundamental toolkit for the development of biomedical foundation models, addressing a critical gap in the field. The platform's design and capabilities are well-aligned with the needs of modern biomedical AI research.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: The contribution of PyTDC is groundbreaking in this field, offering a comprehensive and modular solution for biomedical AI research.
Other Comments Or Suggestions: None.
Questions For Authors: - What managing mechanism is adopted for continuous data updates in PyTDC?
- How does PyTDC manage the use of data with various licenses? For example, if a foundation model is trained using data accessed through PyTDC from sources with different licenses, how does the platform supervise and ensure compliance with these licenses?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Dear Reviewer ATdV,
Thank you very much for your time dedicated to reviewing our work. Including our responses to your queries below.
**PyTDC doesn't include text data, yet natural language is a key modality in biomedicine. Is it appropriate to claim comprehensive support for biomedical foundation models without this modality?**
This is fair. Supporting biomedical foundation models is an aspirational goal in this case. We focus this release on single-cell foundation models and are working to add ESM3 to incorporate the protein sequence modality. We could more strongly emphasize this point, or adjust our claim/headline to match “foundation models in therapeutic discovery”.
**As a non-commercial platform, how does the group implement the continuous update of new data?**
This is addressed via an API-first approach. By invoking external APIs rather than storing static data, we ensure up-to-date retrieval. The chosen MVC design pattern and the resulting “data view” allow us to perform data manipulation per dataset definition and, as such, obtain an ML-ready dataset fitting the task or resource definition.
**As an open-source, API-based platform, the user experience of PyTDC will require more feedback from future users to fully evaluate its effectiveness and usability.**
This is true. It is hard to fully and rigorously assess the success of this release. However, we have some very positive indicators, including a sharp increase in GitHub stars (now 1,000+), a 2x increase in the PyPI package's monthly active users (MAU), and ~5k invocations of the model server APIs in the last month alone.
**What managing mechanism is adopted for continuous data updates in PyTDC?**
Our platform emphasizes an API-first approach rather than storing static data, thus ensuring up-to-date retrieval. However, for static datasets, we cannot yet guarantee continuous data updates.
**How does PyTDC manage the use of data with various licenses? For example, if a foundation model is trained using data accessed through PyTDC from sources with different licenses, how does the platform supervise and ensure compliance with these licenses?**
We don’t yet have rigorous licensing enforcement. However, we expose all licenses for datasets on our website and urge users to review permits for the datasets they are using. | null | null | null | null | null | null |
CoreMatching: A Co-adaptive Sparse Inference Framework with Token and Neuron Pruning for Comprehensive Acceleration of Vision-Language Models

Accept (poster)

Summary: CoreMatching combines token sparsity and neuron sparsity through the interaction between core neurons and core tokens, achieving comprehensive acceleration of VLMs. The proposed projection-guided criterion provides a more accurate way to evaluate token importance, and the co-adaptive framework effectively reduces computational costs while maintaining high performance, which has important practical significance for deploying VLMs on resource-constrained devices.
Claims And Evidence: The claims in this paper should be correct.
Methods And Evaluation Criteria: The used evaluation criteria is standard in token pruning / merging. The experimental results are impressive.
Theoretical Claims: The theoretical claims seem to be solid.
Experimental Designs Or Analyses: My main concerns are about experimental verification.
1. **Compatibility with flash-attention**: Is this technique compatible with flash-attention? As an acceleration-focused work, it should align with mainstream acceleration frameworks. I did not see any discussion from the authors about this.
2. **Necessity of Core Neurons**: The paper claims "The angle should be an essential indicator for token importance evaluation". While I agree with this claim, why not directly integrate the angle and attention score instead of using core neurons to guide core token selection? Is there experimental evidence supporting the superiority of this design choice?
3. **Insufficient Validation of Generalization**:
(1) **Video datasets**: Many existing methods report performance on video datasets, _e.g._, Video MME, but CoreMatching does not provide such results.
(2) **LVLMs using dynamic resolution**: LLaVA-1.5 does not use dynamic resolution. How does CoreMatching perform on LVLMs using dynamic resolution (e.g., Qwen2.5-VL or LLaVA-NEXT)?
4. **Applicability to Training Acceleration**: Can CoreMatching be extended to accelerate training?
Supplementary Material: I have carefully reviewed the 'Experiments Settings' section in the supplementary materials. The 'Latency on NVIDIA A100' experiments are also impressive.
Relation To Broader Scientific Literature: n/a
Essential References Not Discussed: Token pruning and merging are two critical technical paths for token sparsity. While the authors have thoroughly compared pruning-based methods, there may be a lack of comparison against merging-based methods, _e.g._, [1,2].
[1] Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, and Judy Hoffman. Token merging: Your vit but faster. In International Conference on Learning Representations, 2023.
[2] Chen Ju, Haicheng Wang, Haozhe Cheng, Xu Chen, Zhonghua Zhai, Weilin Huang, Jinsong Lan, Shuai Xiao, and Bo Zheng. Turbo: Informativity-driven acceleration plug-in for vision-language large models. In European Conference on Computer Vision, pages 436–455. Springer, 2025.
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: n/a
Questions For Authors: Please refer to comments about experiments and references. In addition, will you open-source the code after acceptance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer cD4w,
We would like to thank you for taking the time to review our paper and provide valuable feedback. We appreciate the opportunity to address your questions and concerns. We believe these discussions and revisions will help further improve the paper.
***Q1. Is this technique compatible with FlashAttention?***
**A1.** Thank you for your valuable question. **Our method is fully compatible with FlashAttention**, as CoreMatching sparsifies in the FFN block, orthogonally complementing FlashAttention in the attention block. *Our implementation already leverages FlashAttention* by default through the LLaVA-1.5 framework, and we will clarify this explicitly in the revision.
***Q2. Why not directly use angle instead of core neurons to guide core token selection?***
**A2.** Thank you for your insightful comments.
1. **Theoretically, angle-based metrics alone may cause conflicts when combined directly with neuron sparsity, due to inconsistent token-neuron interactions.** Our neuron-guided token sparsification ensures consistency—core tokens activate core neurons, stabilizing neuron activation and preventing negative interference.
2. **Empirically, our new experiments (Tab. 4, provided link: https://drive.google.com/file/d/1tYVMhd12V6Qgx6aqyLEK1rG8lCPOJvcb/view?usp=sharing) demonstrate improved task performance from combining neuron-guided token selection with neuron sparsity.** Using angular metrics without alignment indeed degraded performance, supporting our theoretical insights. We agree angle-based metrics remain valuable for token sparsity and will expand this discussion accordingly.
We will expand this discussion in the revised paper and thanks for your question.
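For intuition only — this is a generic angle-based scoring sketch, not CoreMatching's actual projection-guided criterion, and the function name `angular_scores` and the choice of reference direction are assumptions — token importance via angles could be computed roughly as:

```python
import numpy as np

def angular_scores(hidden: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Angle (radians) between each token's hidden state (rows of `hidden`)
    and a reference direction, e.g. a projection axis or a query vector."""
    h = hidden / np.linalg.norm(hidden, axis=1, keepdims=True)
    d = direction / np.linalg.norm(direction)
    cos = np.clip(h @ d, -1.0, 1.0)  # guard against floating-point overshoot
    return np.arccos(cos)

rng = np.random.default_rng(1)
hidden = rng.normal(size=(8, 16))
direction = rng.normal(size=16)
scores = angular_scores(hidden, direction)
keep = np.argsort(scores)[:4]  # keep the 4 tokens most aligned with `direction`
print(keep)
```

A score of this kind is orthogonal to attention magnitude, which is why combining the two (or using one to guide the other, as the authors describe) is a design decision rather than a free lunch.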
***Q3. More experiments on video datasets.***
**A3.** Thank you for the constructive suggestion. We have conducted additional experiments on video datasets, and the results are presented in Tab. 3 of the anonymous link below:
https://drive.google.com/file/d/1tYVMhd12V6Qgx6aqyLEK1rG8lCPOJvcb/view?usp=sharing
As shown, CoreMatching continues to outperform advanced acceleration methods such as FastV and PruMerge on video tasks. Interestingly, due to the high redundancy in video frames, token pruning sometimes even improves performance over the original model. This highlights CoreMatching’s strong potential in accelerating video LLMs. We will include these results in the revised version. Thank you again for your valuable suggestion.
***Q4. How does CoreMatching perform on LVLMs using dynamic resolution?***
**A4.** Thank you for the helpful question. We have added experiments on Qwen2.5-VL across OCR, chart, and document understanding benchmarks. These results, along with additional experiments on LLaVA, are shown in **Tab.2** of the anonymous link:
https://drive.google.com/file/d/1tYVMhd12V6Qgx6aqyLEK1rG8lCPOJvcb/view?usp=sharing
As shown, CoreMatching achieves near-lossless performance even after pruning approximately 90% of the tokens, demonstrating its effectiveness in state-of-the-art LVLMs with dynamic resolution. We will include these results in the revised version. Thank you again for your thoughtful suggestion.
***Q5. Can CoreMatching be extended to accelerate training?***
**A5.** Thank you for the constructive question. We believe CoreMatching can significantly accelerate training, with substantial potential:
- **Forward pass:** CoreMatching enhances efficiency in both pre-filling and decoding, benefiting supervised fine-tuning and RL-based training (e.g., addressing decoding bottlenecks in RL methods like GRPO).
- **Backward pass:** CoreMatching reduces computational costs during gradient updates by activating only core neurons (thus updating fewer weights) and pruning tokens (reducing memory usage and computation).
We plan to explore this in future work and will add relevant discussion in the revised version.
***Q6. Essential references not discussed: lack of comparison against merging-based methods.***
**A6.** Thank you for pointing out this important omission. We are happy to provide the following discussion:
In fact, CoreMatching is complementary to merging-based approaches. On one hand, the angle similarity and neuron activation metrics proposed in CoreMatching can serve as alternative importance or similarity scores in merging-based methods, replacing conventional attention-based metrics. On the other hand, token merging strategies can also be incorporated into CoreMatching to better retain globally informative tokens for complex tasks.
We will cite the suggested works and add a discussion in the revised version. Thank you again for the valuable recommendation.
***Q7. Will you open-source the code after acceptance?***
**A7.** Thank you very much for your interest in our work. **Yes —** **we fully intend to release the code upon acceptance**.
Sincerely,
Authors | Summary: The authors explore a fundamental question on jointly leveraging token and neural sparsity to enhance the inference efficiency of vision-language models (VLMs). The paper introduces the concept of core neurons and investigates their correspondence with core tokens. Building on this observation, the authors propose CoreMatching, a co-adaptive sparse inference framework that exploits the interplay between token and neuron sparsity to accelerate VLMs. Experimental results demonstrate that the proposed method outperforms state-of-the-art baselines across ten image understanding tasks and three hardware platforms.
## update after rebuttal
I appreciate the authors’ detailed responses, which have successfully addressed my concerns.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: The proof is technically sound.
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: This paper proposes a novel approach that jointly applies token sparsity and model sparsity to accelerate VLMs.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The paper is well-motivated, addressing an interesting, fundamental, and important research question on VLM efficiency. It is well-structured, beginning with preliminary concepts and systematically deriving the relationship between core neurons and core tokens. The experimental results are compelling, demonstrating the effectiveness of the proposed approach.
Other Comments Or Suggestions: Please see the questions.
Questions For Authors: 1. In the proposed method, the authors retain the top $\rho\%$ most activated neurons as token-wise core neurons and identify the most frequent $\beta\%$ neurons as sentence-wise core neurons. A key question is how to determine these two hyperparameters to effectively balance token sparsity and neuron sparsity.
2. The proposed method focuses solely on core neurons in the FFN block. Could attention mechanisms also be incorporated to further enhance efficiency?
3. In the right column of L128, “3%” should be “4.6%”.
4. I am curious about the choice of hardware for the throughput experiments. Why did the authors use the NVIDIA Titan XP (12GB)? Given that this GPU is relatively old and may not reflect the performance of modern hardware, could the authors clarify the rationale behind this selection?
5. In Section 4.3, the authors state that token-only sparsity primarily accelerates the pre-filling stage with minimal impact on decoding, while neuron-only sparsity mainly speeds up the decoding stage with little improvement during pre-filling. This observation is interesting. Providing additional explanations or insights into the underlying reasons would further enhance the clarity and impact of this discussion.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer T5zD,
Thank you for your thoughtful review and encouraging feedback. We're pleased you found the paper well-motivated and well-structured, and we welcome the opportunity to address your comments to further improve our work.
***Q1. How to determine these two hyperparameters to balance token sparsity and neuron sparsity?***
**A1:** Thank you for your constructive question. **In fact, our method does not require manual balancing between token sparsity and neuron sparsity.** During the pre-filling stage, we first dynamically select important tokens via the parameter-free maximum geometric distance strategy. During the decoding stage, we determine core neurons based on core tokens using two hyperparameters (α, β). The key steps are summarized below:
1. **Token selection strategy**: For selecting core tokens, we adopt a maximum geometric distance strategy to adaptively select informative tokens based on each input, avoiding manual sparsity settings. Tab.4 reports the average number of tokens retained across tasks.
2. **Neuron selection strategy**: Core neurons are selected based on core tokens, ensuring theoretical consistency between token and neuron sparsity: core tokens tend to activate more core neurons. As for the hyperparameters α and β, which determine the retained neuron ratio, we refer to the detailed analysis in CoreInfer [1]. We also conducted ablation studies on LLaVA (see Tab. 1 in the main text), and ultimately adopted α=0.2, β=0.4 as the optimal values for balancing performance and efficiency.
We will highlight this point more clearly in the revised version. Thank you again for your thoughtful question.
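A rough NumPy illustration of the two-stage selection the reviewer describes (keep each token's top-ρ fraction of activated neurons, then take the β fraction of neurons appearing most often across those per-token sets). The shapes, the ratio names `rho`/`beta`, and the function `core_neurons` are illustrative, not the paper's implementation:

```python
import numpy as np

def core_neurons(acts: np.ndarray, rho: float = 0.2, beta: float = 0.4) -> np.ndarray:
    """acts: (num_tokens, num_neurons) post-activation magnitudes.
    Stage 1: per token, keep the top-rho fraction of neurons (token-wise core).
    Stage 2: keep the beta fraction of neurons that appear most frequently
    across the per-token core sets (sentence-wise core)."""
    n_tokens, n_neurons = acts.shape
    k = max(1, int(rho * n_neurons))
    # indices of each token's top-k activated neurons (unordered within top-k)
    topk = np.argpartition(-acts, k - 1, axis=1)[:, :k]
    counts = np.bincount(topk.ravel(), minlength=n_neurons)
    m = max(1, int(beta * n_neurons))
    core = np.argpartition(-counts, m - 1)[:m]
    return np.sort(core)

rng = np.random.default_rng(0)
acts = rng.random((16, 10))       # 16 tokens, 10 neurons (toy sizes)
print(core_neurons(acts, rho=0.3, beta=0.4))  # 4 sentence-wise core neuron indices
```

Because stage 2 only counts membership in the per-token sets, the sentence-wise core is stable under small perturbations of individual activations, which matches the consistency argument above.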
***Q2. Could attention mechanisms also be incorporated?***
**A2.** Thank you for your question. Regarding acceleration within the Attention Block, we would like to offer the following clarifications:
1. **Although our method is primarily applied in the FFN block, it also contributes to reducing the computational cost of the Attention block.** This is because our approach jointly sparsifies tokens and neurons in a unified pass, significantly reducing the sequence length input to Attention layers. Since attention complexity in VLMs scales linearly with token count, our token sparsification notably accelerates attention computation.
2. **Our method is compatible with other attention acceleration techniques.** For instance, our implementation already integrates the widely-used Flash Attention. As our sparsification occurs within the FFN block, it can seamlessly combine with other attention acceleration methods for further efficiency improvements.
We appreciate your insightful question and will highlight this point more clearly in the revised version.
***Q3. In the right column of L128, “3%” should be “4.6%.”***
**A3.** Thank you for your careful reading and for pointing out this issue. Upon verification, you are correct — the accuracy drop should indeed be 4.6% rather than 3%. We will correct this in the revised version of the paper. We sincerely appreciate your attention to detail.
***Q4. Why did the authors use the NVIDIA Titan XP?***
**A4.** Thank you for the thoughtful question. Our experiments covered three GPUs to highlight device generality: **Titan XP** (low-performance; Fig. 9, main text), **RTX 6000** (mid-performance; Fig. 10, main text), and **A100** (high-performance; Fig. 16, Appendix). We selected Titan XP for primary comparisons as it exemplifies resource-constrained environments—where inference acceleration is critical—and notably cannot efficiently run the unoptimized 7B model due to memory limitations. Our results on RTX 6000 and A100 also demonstrate consistent speedups across different hardware configurations. We thank you again for the thoughtful question.
***Q5. Explanation about token sparsity benefiting pre-filling and neuron sparsity benefiting decoding.***
**A5.** Thank you for your interest—we are happy to clarify:
- **Token sparsity** primarily benefits the pre-filling stage, where many tokens (especially image tokens in VLMs) are processed simultaneously, and computational cost scales linearly with sequence length. Thus, reducing tokens significantly lowers overhead here. During decoding, however, tokens are processed sequentially with cached keys and values, limiting token sparsity's impact mainly to minor KV-cache computations.
- **Neuron sparsity** primarily accelerates the decoding stage. Decoding processes single tokens individually, with FFNs dominating computation. Sparsifying neurons effectively reduces FLOPs in this stage. In pre-filling, batching multiple tokens makes neuron-level sparsification impractical, as full activation tracking remains necessary.
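To make the asymmetry concrete, here is a back-of-the-envelope FLOP model. The formulas and constants (576 image tokens as in LLaVA-1.5, d_model=4096, d_ffn=11008 as in a 7B LLaMA-style model) are simplified illustrations, not measurements from the paper:

```python
# Toy FLOP model: pre-filling processes all n tokens at once;
# decoding processes one token per step against cached keys/values.
def prefill_flops(n_tokens, d_model, d_ffn):
    attn = 2 * n_tokens * n_tokens * d_model  # QK^T and AV, per layer (approx.)
    ffn = 2 * n_tokens * d_model * d_ffn      # up- and down-projection
    return attn + ffn

def decode_step_flops(d_model, d_ffn, neuron_keep=1.0):
    attn = 2 * d_model * d_model                  # single-token projections (approx.)
    ffn = 2 * d_model * int(d_ffn * neuron_keep)  # only active neurons computed
    return attn + ffn

full = prefill_flops(576, 4096, 11008)
sparse = prefill_flops(64, 4096, 11008)  # ~90% of tokens pruned
print(f"pre-fill FLOP reduction from token pruning: {full / sparse:.1f}x")

full_d = decode_step_flops(4096, 11008)
sparse_d = decode_step_flops(4096, 11008, neuron_keep=0.3)
print(f"decode-step FLOP reduction from neuron sparsity: {full_d / sparse_d:.1f}x")
```

Even in this crude model, token pruning dominates the pre-fill cost (which scales with sequence length) while neuron sparsity dominates the per-step decode cost (where the FFN is the largest term), mirroring the two bullets above.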
We greatly appreciate your constructive suggestion and will expand this discussion in the revised version.
Sincerely,
Authors
[1] CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Activation. | Summary: This paper proposes CoreMatching, combining the sparsity of core neurons and core tokens for VLMs. Experiments show that CoreMatching excels in both model accuracy and inference efficiency.
Claims And Evidence: The claims (i.e., model accuracy and deployment efficiency) are clear. Theoretical analysis and efficiency evaluations are provided to support the claim.
Methods And Evaluation Criteria: Yes, both the methods and evaluation criteria are clear.
Theoretical Claims: I check all the proofs.
Experimental Designs Or Analyses: I check the experiments. One concern is the lack of accuracy comparison with PowerInfer [1], and PowerInfer-2 [2]. Another concern is that I am curious about the accuracy of some ocr (high-resolution) tasks, e.g., DocVQA & InfoVQA.
[1] Song Y, Mi Z, Xie H, et al. Powerinfer: Fast large language model serving with a consumer-grade gpu[C]//Proceedings of the ACM SIGOPS 30th Symposium on Operating Systems Principles. 2024: 590-606.
[2] Xue Z, Song Y, Mi Z, et al. Powerinfer-2: Fast large language model inference on a smartphone[J]. arXiv preprint arXiv:2406.06282, 2024.
Supplementary Material: I review all the supplementary material.
Relation To Broader Scientific Literature: This paper contributes to improving the efficiency of deploying VLMs by combining the sparsity of core neurons and core tokens.
Essential References Not Discussed: Not found.
Other Strengths And Weaknesses: This article provides a detailed discussion on the sparsity of core neurons and core tokens, but the technical contribution is insufficient, as it merely combines these two approaches.
Other Comments Or Suggestions: No.
Questions For Authors: 1. More experiments. As in the 'Experimental Designs Or Analyses' part.
2. More discussions regarding technical contribution, as in the 'Other Strengths And Weaknesses' part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer LK6j,
We sincerely thank you for taking the time to thoroughly read our paper and for your positive evaluation. We are excited to have the opportunity to address your questions and concerns. These discussions and revisions will further strengthen our work.
***Concern 1. Lack of accuracy on some OCR tasks.***
**A1.** Thank you for raising this valuable concern. In response, we have conducted additional experiments on OCR, chart understanding, and document understanding benchmarks using both LLaVA and Qwen models. The results are presented in Tab.1 and Tab.2 of the anonymous link below:
https://drive.google.com/file/d/1tYVMhd12V6Qgx6aqyLEK1rG8lCPOJvcb/view?usp=sharing
Specifically, we supplemented the following two experiments: (1) On LLaVA-1.5-7B/13B, CoreMatching outperforms FastV and PruMerge on OCR and chart/document tasks with minimal performance drop, using on average less than 10% of tokens (Tab.1). (2) On Qwen2.5-VL-3B/7B (dynamic-resolution LVLMs), CoreMatching maintains near-lossless performance with ~10% of tokens and even surpasses the original model on AI2D (Tab.2).
We will include these results in the updated version of our paper. Thank you again for your helpful suggestion.
***Concern 2. Lack of accuracy comparison with PowerInfer and PowerInfer-2.***
**A2.** We appreciate your constructive suggestion and would like to clarify why such comparison experiments were not included in our current submission:
1. **Different Target Domains.** PowerInfer 1&2 are primarily designed for accelerating LLMs, while our work specifically focuses on accelerating VLMs. It remains unclear whether PowerInfer-style methods can be effectively adapted to popular VLM architectures such as LLaVA. Therefore, directly applying PowerInfer 1&2 to VLMs and comparing them with our method would not constitute a fair comparison.
2. **Different Usage Costs.** PowerInfer 1&2 rely on predictors trained specifically for each model, while our method is training-free. To conduct a direct comparison, we would need to re-implement both PowerInfer 1&2 and retrain layer-wise prediction modules from scratch for VLM architectures, which is computationally expensive and would make the comparison unfair to our training-free method.
We appreciate your suggestion to include comparison with neuron sparsity methods. We will make our best effort to incorporate these experiments in the revised version. Once again thank you for your valuable suggestion.
***Concern 3. The article's technical novelty is limited, as it merely combines two existing approaches.***
**A3.** Thank you for bringing up this concern, though we respectfully disagree. Below, we outline our reasoning in detail:
- **Methodologically, our approach is not a simple combination of existing token pruning techniques.** To the best of our knowledge, our work is the first to utilize *neurons* to guide token sparsity in VLMs. *While prior token pruning methods often rely on attention-score-based importance metrics, our work is the first to demonstrate the suboptimality of such metrics (see Section 3.2).*
- **Theoretically**, **we propose a novel interpretation of the interaction between the neuron and token dimensions in VLMs**, offering new insights into how VLMs perceive visual and semantic content. In Section 3.2, we present two theoretical insights. *These insights shed light on how the FFN and Attention blocks operate and interact.* We believe this theoretical framework can serve as a foundation for future innovations and a deeper understanding of both VLMs and LLMs.
- **Experimentally, our approach outperforms existing approaches that simply combine token sparsity and neuron sparsity in terms of both task performance and efficiency.** In terms of efficiency, our method achieves double sparsity with only a single forward pass over the neuron dimension, avoiding duplicated computation; in terms of performance, our core tokens and core neurons have consistency guarantees: core tokens tend to activate more core neurons, and retaining only core tokens helps stabilize core neuron activation. This ensures that combining neuron and token sparsity in our method does not introduce performance conflicts, which is the main limitation of simply merging the two sparsity techniques.
In summary, we argue that our work does not merely combine neuron and token sparsity, but instead proposes a novel and unified approach that integrates the two dimensions. This comprehensive integration is crucial for accelerating VLM inference in a principled and effective manner. We will highlight these contributions more explicitly in the revised version. Thank you again for raising this important concern.
Sincerely,
Authors | Summary: This paper proposes a sparse inference framework to reduce the inference latency of Vision-Language Models (VLMs). The main method combines token compression and neural unit compression techniques, and establishes a connection between the two. Through the CoreMatching approach, it is possible to significantly reduce the inference latency and memory consumption of the LLaVA model without substantially compromising the model's performance. The method does not require additional training and has great application potential.
Claims And Evidence: NA
Methods And Evaluation Criteria: NA
Theoretical Claims: NA
Experimental Designs Or Analyses: NA
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
The sparse inference framework CoreMatching proposed in the paper shows good results in experiments. Both the theoretical arguments and experimental validations in the paper are quite comprehensive.
Weaknesses:
No obvious weaknesses.
It is recommended to conduct further demonstrations on more tasks, especially OCR tasks that are highly sensitive to the number of visual tokens, some dense OCR scenarios, and some chart understanding tasks. Another suggestion is to perform ablation studies on high-resolution images.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer TfEt,
We sincerely thank you for taking the time to review our paper and for providing valuable feedback. We are pleased to hear that you found both our theoretical analysis and experimental validation to be comprehensive. We appreciate the opportunity to address your suggestions, which we believe will further strengthen our work.
***Suggestion1. It is recommended to conduct further demonstrations on more tasks, especially OCR tasks that are highly sensitive to the number of visual tokens, some dense OCR scenarios, and some chart understanding tasks.***
**A1.** We greatly appreciate your constructive suggestion. In response, we have conducted additional experiments on OCR, chart understanding, and document understanding tasks. The results are summarized in Table 1 and Table 2 of the anonymous link below:
https://drive.google.com/file/d/1tYVMhd12V6Qgx6aqyLEK1rG8lCPOJvcb/view?usp=sharing
Specifically, we supplemented the following two experiments:
**Experiment 1. Results on the LLaVA-1.5 model on more tasks.** To facilitate comparison with existing acceleration methods, we conducted extensive evaluations on LLaVA-1.5-7B and LLaVA-1.5-13B across additional OCR and chart/document understanding tasks. We also reproduced and collected results for FastV and PruMerge on these tasks to ensure a fair comparison. As shown in Tab.1 of the linked document, CoreMatching consistently outperforms other acceleration methods on most tasks while incurring only minimal performance degradation compared to the original models. Notably, for tasks such as DocVQA and InfoVQA, which often require understanding only small portions of text within an image, CoreMatching’s dynamic token pruning allows it to achieve near-lossless performance using significantly fewer tokens—on average, less than 10% of the original token count.
**Experiment 2. Results on the Qwen2.5-VL model.** To further evaluate our method on more advanced models, we conducted experiments on Qwen2.5-VL-3B and Qwen2.5-VL-7B, which are LVLMs utilizing dynamic resolution. As shown in Tab.2 of the linked document, CoreMatching continues to demonstrate strong performance, achieving near-lossless results using only around 10% of the tokens. Remarkably, it even outperforms the original model on AI2D, highlighting the effectiveness and adaptability of CoreMatching on state-of-the-art dynamic-resolution LVLMs.
We will include these experimental results in the final version of the paper. Thank you again for your insightful and constructive feedback!
***Suggestion2. Another suggestion is to perform ablation studies on high-resolution images.***
**A2.** Thank you for your valuable suggestion. In response, we conducted ablation studies on DocVQA, InfoVQA, and ChartQA—datasets composed of scanned documents or detailed visual charts that typically require models to process localized and densely packed text. For example, many questions in DocVQA reference specific words or lines within the document, making high-resolution input essential for accurate understanding.
To preserve image resolution during VLM inference, we used the Qwen2.5-VL model, which supports dynamic resolution. We performed ablation studies to evaluate the impact of different components of our method by testing performance under three settings: token-only sparsity, neuron-only sparsity, and combined sparsity. We evaluated both task performance and inference efficiency on NVIDIA Titan Xp, a resource-constrained device representative of real-world deployment scenarios. The results are shown in the table below.
| | **DocVQA** | **InfoVQA** | **ChartQA** | **Pre-filling (s)** | **Decoding(s)** |
| -------------------- | ---------- | ----------- | ----------- | ------------------- | --------------- |
| Qwen2.5-VL-3B | 93.9 | 77.1 | 84.0 | 2.3 | 4.5 |
| Neuron Sparsity Only | 93.2 | 77.0 | 83.7 | 2.3 | 1.6 |
| Token Sparsity Only | 93.5 | 77.1 | 83.8 | 1.2 | 4.3 |
| CoreMatching | 93.2 | 77.1 | 83.6 | 1.0 | 1.4 |
As demonstrated, token-only and neuron-only sparsity each result in negligible performance degradation, but only accelerate a single stage of inference. In contrast, our combined method maintains near-identical performance while providing significant speedups in both the pre-filling and decoding stages, highlighting its practical value in high-resolution scenarios.
**We will include these results in the revised version of the paper. Additionally, we plan to add more qualitative examples on high-resolution images to further illustrate the applicability of our approach.** We sincerely appreciate your insightful and constructive feedback.
Sincerely,
Authors | null | null | null | null | null | null |
Positive-unlabeled AUC Maximization under Covariate Shift | Accept (poster) | Summary: In this paper, the authors focus on addressing the problem of AUC maximization under distribution shifts between training and testing data, specifically under the scenario of covariate shift, where the input distributions of the training and test data differ, but the conditional distribution of the class labels remains unchanged. To tackle this issue, the authors theoretically derive two estimators of the AUC risk, and combine positive instances with unlabeled data for training, without the need for negative examples or class-prior information. Extensive experiments on benchmark datasets validate the effectiveness of the proposed method.
Claims And Evidence: The claims presented in the paper are supported by clear theoretical derivations and experimental validation.
Methods And Evaluation Criteria: The proposed method fills the gap in current AUC maximization techniques under covariate shift.
Theoretical Claims: The theoretical claims in the paper are both correct and rigorous.
Experimental Designs Or Analyses: Overall, the experiments are comprehensive, but I still have a few concerns: these include the absence of experiments under noisy conditions and concerns regarding some outliers in the experimental results. The specific details will be provided in the Questions.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The contribution of this paper lies in proposing a new AUC optimization method that addresses covariate shift in weakly supervised learning, building on Kumagai et al.'s (2024) work on positive class distribution shift and Lu et al.'s (2022) advancements in importance weighting methods for domain adaptation and transfer learning.
Kumagai, A., Iwata, T., Takahashi, H., Nishiyama, T., and Fujiwara, Y. Auc maximization under positive distribution shift. In NeurIPS, 2024.
Lu, N., Zhang, T., Fang, T., Teshima, T., and Sugiyama, M. Rethinking importance weighting for transfer learning. In Federated and Transfer Learning, pp. 185–231. Springer, 2022.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
(1) The method proposed in the paper is innovative and addresses a key issue regarding covariate shift in AUC maximization, an area that has not yet been explored in the literature.
(2) The two estimators of the test AUC risk under covariate shift proposed in the paper are effective. This is demonstrated both theoretically and experimentally.
(3) The paper is well-written and easy to understand.
Weaknesses:
(1) The paper lacks a clear explanation of the motivation for deriving the two estimators of the AUC risk on the test distribution under covariate shift.
(2) The proposed estimators are not unbiased, but no analysis of their error bounds is provided.
(3) The structure of the paper is essentially identical to that of Kumagai et al. (2024).
Other Comments Or Suggestions: (1) Has the issue of noise been considered? When the unlabeled data contains significant noise or redundant information, what impact would this have on the performance of the proposed AUC maximization method?
(2) The paper explores a new scenario of "covariate shift" and argues that this scenario more closely approximates real-world situations. However, in real-world applications, such inconsistencies might also manifest as changes in class distributions (positive distribution shift) or shifts in class priors. In these inconsistent scenarios, can the proposed method still effectively perform AUC maximization? Does the paper address these issues and provide corresponding designs? Is the proposed method solely applicable to covariate shift?
Questions For Authors: (1) The design in the paper relies on the sigmoid loss function. Would other loss functions be compatible with this solution approach?
(2) Why do the PAUC results in Table 1 perform particularly poorly, even being the worst in most cases? They are troublingly bad.
(3) The results on the SVHN dataset are consistently anomalous. For example, in Table 1, the UDAUC method exhibits unexpectedly poor results on the SVHN dataset compared to the other three datasets. In Table 2, the experimental results on SVHN show an anomalous reverse pattern compared to the other datasets.
(4) Fig.2 shows the importance weight distribution on the FashionMNIST dataset when the positive class prior is 0.1. Does this result suggest that the method might overly rely on the test data during training? Could this lead to overfitting the test data? Furthermore, how do the distributions behave when the priors are set to 0.01/0.05? Would they show more pronounced features?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive comments and constructive feedback.
We added experimental results in the anonymous URL: https://anonymous.4open.science/r/icml25s-6C74/results.pdf
> Weaknesses: (1) The paper lacks a clear explanation of the motivation for deriving the two estimators of the AUC risk on the test distribution under covariate shift.
We have described the motivation for deriving the two AUC risk estimators in Section 4.2 and the fourth paragraph of Section 1.
Specifically, under the covariate shift, existing AUC maximization methods cannot minimize the test AUC risk.
To deal with this, we derived two AUC risk estimators that can approximate the test AUC risk.
Since these estimators are calculated with different data for classifier learning (i.e., Eqs. (17) and (19)), we can use all available data $X ^{{\rm p}} _{{\rm tr}} \cup X _{{\rm tr}} \cup X _{{\rm te}}$ for classifier learning by using both estimators (i.e., Eq. (20)). We will clarify this.
> there is no analysis of their error bounds.
Thank you for the important feedback. We would like to work on analyzing the error bounds in the future.
> (1) Has the issue of noise been considered? When the unlabeled data contains significant noise or redundant information, what impact would this have on the performance of the proposed AUC maximization method?
Yes, we have considered the issue of noise.
This is because unlabeled data can be regarded as noisy negative data since it contains a small number of positive data (noise).
As the class-prior $\pi$ becomes larger (i.e., as the noise in the unlabeled data increases), the performance of our method tends to decrease, as shown in Table 1. Nevertheless, our method outperformed the other methods.
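To illustrate this view of unlabeled data as noisy negatives, here is a small synthetic sketch (made-up Gaussian class-conditionals, not our benchmark setup): unlabeled data drawn from the marginal is a prior-weighted mixture, so labeling it all as negative mislabels roughly a fraction $\pi$ of the points.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_unlabeled(n, prior, pos_sampler, neg_sampler):
    """Draw unlabeled data from the marginal distribution: a mixture of the
    positive and negative class-conditionals weighted by the class prior."""
    is_pos = rng.random(n) < prior
    x = np.where(is_pos, pos_sampler(n), neg_sampler(n))
    return x, is_pos

x_u, is_pos = sample_unlabeled(
    10_000, prior=0.1,
    pos_sampler=lambda n: rng.normal(+2.0, 1.0, n),
    neg_sampler=lambda n: rng.normal(-2.0, 1.0, n),
)
# Treating x_u as all-negative mislabels about a prior-sized fraction of it,
# so the effective label noise grows with the class prior.
print(is_pos.mean())
```

This is why larger class priors translate directly into noisier "negative" supervision in the PU setting.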
> (2) ... in real-world applications, such inconsistencies might also manifest as changes in class distributions (positive distribution shift) or shifts in class priors. In these inconsistent scenarios, can the proposed method still effectively perform AUC maximization?
Thank you for the insightful question.
Please see the answer to the last comment of Reviewer Gocn.
> (1) The design in the paper relies on the sigmoid loss function. Would other loss functions be compatible with this solution approach?
We can derive the same form of the loss function of Eqs. (17) and (19) when using symmetric loss functions as described in Lines 259--267.
The symmetric functions include many popular losses, such as sigmoid, ramp, and unhinged losses.
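As a generic illustration of this loss family (a textbook pairwise surrogate, not the PU estimators of Eqs. (17) and (19)), the empirical AUC risk with the sigmoid loss averages the surrogate over all (positive, negative) score pairs; the symmetry property (here $\ell(z)+\ell(-z)=1$) is what the derivation relies on.

```python
import numpy as np

def sigmoid_loss(margin):
    """Sigmoid surrogate for the 0-1 ranking loss. It is symmetric:
    sigmoid_loss(z) + sigmoid_loss(-z) == 1 for every z."""
    return 1.0 / (1.0 + np.exp(margin))

def auc_risk(scores_pos, scores_neg):
    """Empirical AUC risk: mean surrogate loss over all
    (positive, negative) pairs; mis-ranked pairs are penalized."""
    margins = scores_pos[:, None] - scores_neg[None, :]
    return float(np.mean(sigmoid_loss(margins)))

good = auc_risk(np.array([2.0, 1.5, 3.0]), np.array([-1.0, 0.0]))
bad = auc_risk(np.array([-1.0, 0.0]), np.array([2.0, 1.5, 3.0]))
print(good < 0.5 < bad)  # True: correct ranking gives low risk
```

Any other symmetric loss (e.g., ramp or unhinged) could be dropped into `sigmoid_loss`'s place without changing the structure of the estimator.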
> (2) Why do the PAUC results in Table 1 perform particularly poorly, even being the worst in most cases? They are troublingly bad.
PAUC strongly depends on the assumption that the negative-conditional density does not change between the training and testing to derive the loss function.
However, the negative density changed significantly in our datasets.
In addition, as noted in the PAUC paper, PAUC performs poorly when $\pi_{{\rm te}}$ is small due to its reliance on extracting positive data from unlabeled test data.
For these reasons, PAUC did not work well in our setting.
Note that we have confirmed that PAUC performs well in the setting used in the PAUC paper.
> (3) The results on the SVHN dataset are consistently anomalous.
Since each dataset has different properties, it is unsurprising that the trends of the results differ across datasets.
That said, SVHN is not the only dataset that shows a different pattern.
For example, although trPU, tePU, and CPU worked relatively well in MNIST and FashionMNIST, they performed poorly (AUC is about 0.5) in both SVHN and CIFAR10 in Table 1.
> (4) Fig.2 shows the importance weight distribution on the FashionMNIST dataset when the positive class prior is 0.1. Does this result suggest that the method might overly rely on the test data during training? Furthermore, how do the distributions behave when the priors are set to 0.01/0.05?
This result does not mean our method overly relies on the test data for classifier learning.
This is because the (estimated) importance weights are applied only to PU data from the training distribution in Eqs. (17) and (19).
As described in the second paragraph of Section 5.1, the covariate shift was created so that the high- and low-density regions of the inputs were reversed between the training and testing.
Since importance weights are defined as density-ratio $w({\bf x}) = p_{{\rm te}}({\bf x})/p_{{\rm tr}}({\bf x})$, test data would tend to have larger weights than training data.
(This is also valid for the relative density-ratio in Eq. (21)).
Figure 2 examined whether the estimated weights correctly reflect this characteristic.
When the class-priors are $0.01$ or $0.05$, similar results are obtained in Figure A in the anonymous URL.
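This weight pattern can be reproduced in a self-contained 1-D toy (our illustration with a crude histogram-based plug-in ratio; our actual method estimates the relative density ratio of Eq. (21) jointly with the classifier).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical covariate shift: the high-density input region moves
# from around -1 (training) to around +1 (test).
x_tr = rng.normal(-1.0, 1.0, 5000)
x_te = rng.normal(+1.0, 1.0, 5000)

# Crude plug-in estimate of w(x) = p_te(x) / p_tr(x) on shared bins.
bins = np.linspace(-4.0, 4.0, 33)
p_tr, _ = np.histogram(x_tr, bins=bins, density=True)
p_te, _ = np.histogram(x_te, bins=bins, density=True)
w = p_te / np.maximum(p_tr, 1e-12)

centers = 0.5 * (bins[:-1] + bins[1:])
i_te = np.argmin(np.abs(centers - 1.0))  # test high-density region
i_tr = np.argmin(np.abs(centers + 1.0))  # training high-density region
print(w[i_te] > 1.0 > w[i_tr])  # True: test-region inputs get larger weights
```

Because the density regions are reversed between training and testing, inputs typical of the test distribution necessarily receive weights above 1, which is the characteristic Figure 2 verifies.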
We will clarify this. | Summary: The paper addresses the problem of maximizing the AUC in binary classification tasks under covariate shift, where the input distribution changes between training and test phases, but the conditional distribution of the class label given the input remains the same. The authors propose a novel method that leverages positive and unlabeled (PU) data from the training distribution and unlabeled data from the test distribution. They derive two estimators for the test AUC risk: one based on importance-weighted PU data from the training distribution, and another based on importance-weighted positive data from the training distribution and unlabeled data from the test distribution. The final loss function is a weighted sum of these two estimators. The authors also introduce a dynamic approach for importance weight estimation and classifier learning, which iteratively updates the importance weights and the classifier.
Claims And Evidence: The claims made in the paper are generally supported by experimental results.
Methods And Evaluation Criteria: The proposed methods are well-suited for the problem of AUC maximization under covariate shift.
Theoretical Claims: I checked the correctness of the proofs for the theoretical claims in the paper. Specifically, I reviewed the derivation of Lemma B.1 in Appendix B, which is used to prove Eq. (15). This proof is clear and correct.
Experimental Designs Or Analyses: The scale of the adopted datasets is too small, e.g., only "50 positive and 3,000 unlabeled data in the training distribution and 3,000 unlabeled data in the test distribution". In fact, for benchmarks like MNIST/CIFAR10, the authors could use many more examples (e.g., 10,000+ samples) for training and testing. Besides, more recent AUC optimization methods and PU learning methods should be compared.
Supplementary Material: Yes, I reviewed the supplementary material, which provides additional details supporting the main paper. It includes discussions on the limitations of the proposed method under the covariate shift assumption, the derivation of Eq. (15) with the proof of Lemma B.1, and details on the neural network architectures and hyperparameters used in the experiments. Additionally, it presents further experimental results, such as the impact of varying the number of labeled positive data, the effect of the relative parameter α, and results on tabular datasets.
Relation To Broader Scientific Literature: The key contributions of this paper are closely related to several areas of the broader scientific literature, including AUC maximization, covariate shift adaptation, and positive-unlabeled (PU) learning.
Essential References Not Discussed: While the paper covers a broad range of related work, there are a few essential references that could provide additional context for the key contributions:
1. Nakajima S, Sugiyama M. Positive-unlabeled classification under class-prior shift: a prior-invariant approach based on density ratio estimation
2. Zhao Y, Xu Q, Jiang Y, et al. Dist-pu: Positive-unlabeled learning from a label distribution perspective
3. Jain S, White M, Radivojac P. Estimating the class prior and posterior from noisy positives and unlabeled data
Other Strengths And Weaknesses: Strengths:
1. The paper addresses a significant problem in PU learning under covariate shift, which is common in real-world applications.
2. The paper is generally well-written, explaining the proposed method, with detailed descriptions of each module.
Weaknesses:
1. The paper draws on the importance weighting framework from covariate shift literature (e.g., Sakai, T. and Shimizu, N. Covariate shift adaptation on learning from positive and unlabeled data. In AAAI, 2019) but adapts it for AUC maximization (e.g., Kumagai, A., Iwata, T., Takahashi, H., Nishiyama, T., and Fujiwara, Y. AUC maximization under positive distribution shift. In NeurIPS, 2024). In general, the novelty might be somewhat incremental.
2. The method assumes a specific type of distribution shift (covariate shift), which may limit its applicability in scenarios with other types of shifts.
3. The experimental design is insufficient to convincingly support the paper’s claims.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and constructive feedback.
We added experimental results in the anonymous URL: https://anonymous.4open.science/r/icml25s-6C74/results.pdf
> The scale of the adopted datasets is too small, e.g., only "50 positive and 3,000 unlabeled data in the training distribution and 3,000 unlabeled data in the test distribution". In fact, for benchmarks like MNIST/CIFAR10, the authors could use much more examples (e.g., 10000+ samples) for training and test. Besides, more recent AUC optimization methods and PU learning methods should be compared.
Thank you for the constructive suggestion. We performed additional experiments on larger datasets.
For each dataset, we used 100 positive and 9,000 unlabeled data in the training distribution and 9,000 unlabeled data in the test distribution for training.
In addition, we included the recent PU learning method (PURA) [a] for comparison. Since it is designed for ordinary PU learning, it used PU data in the training distribution.
Margin $\rho$ was selected from $\\{0.1,1,10\\}$ using validation data.
The results are described in Table A of the anonymous URL.
Our method outperformed the other methods.
Since PURA does not consider distribution shift, it did not work well.
We will include these results in the revised paper.
[a] Positive-Unlabeled Learning with Label Distribution Alignment, TPAMI'23
> While the paper covers a broad range of related work, there are a few essential references that could provide additional context for the key contributions:
Thank you for sharing the papers.
We briefly explain the difference between our work and these papers.
The paper [1] proposed a PU learning method under class-prior shift. Unlike our method, it cannot deal with the covariate shift.
The paper [2] proposed a PU learning method with label distribution consistency, but it cannot treat distribution shifts.
The paper [3] proposed a PU learning method from noisy positive and unlabeled data. It cannot treat distribution shifts.
In addition, none of these methods are designed for AUC maximization.
We will include a more detailed discussion of these papers in Section 2.
> The paper draws on the importance weighting framework from covariate shift literature (e.g., Sakai, T. and Shimizu, N. Covariate shift adaptation on learning from positive and unlabeled data. In AAAI, 2019) but adapts it for AUC maximization (e.g., Kumagai, A., Iwata, T., Takahashi, H., Nishiyama, T., and Fujiwara, Y. AUC maximization under positive distribution shift. In NeurIPS, 2024). In general, the novelty might be somewhat incremental.
The novelty of our paper is mainly twofold.
The first is to propose a novel and significant problem setting of AUC maximization under covariate shift with PU data.
The second is to theoretically derive two novel AUC risk estimators to address this problem based on importance weighting, effectively utilizing the properties of AUC maximization and PU learning.
Moreover, we also proposed to use the recently proposed dynamic importance weighting framework (Fang et al., 2020; 2023) for AUC maximization with PU data on complex models and data.
As other reviewers acknowledged, we believe our method has sufficient technical novelty.
> The method assumes a specific type of distribution shift (covariate shift), which may limit its applicability in scenarios with other types of shifts.
As you mentioned, our paper assumes the covariate shift.
This is because the covariate shift is the most common and important shift in practice (He et al., 2023).
Note that unsupervised distribution shift adaptation methods require some assumption about the shift type since no supervision is available in the test distribution.
Since many papers at top conferences focus on the covariate shift only [b, c, d, e, f], we believe that our work has sufficient contributions.
[b] Test-time Adaptation for Regression by Subspace Alignment, ICLR'25
[c] Robust importance weighting for covariate shift, AISTATS'20
[d] Double-weighting for covariate shift adaptation, ICML'23
[e] Adapting to continuous covariate shift via online density ratio estimation, NeurIPS'23
[f] An information-theoretical approach to semi-supervised learning under covariate-shift, AISTATS'22
Meanwhile, we additionally evaluated our method under a class-prior shift, in which the class-prior changes, but the class-conditional density remains the same.
Table C in the anonymous URL shows the results.
Our method empirically worked well.
This would be because our method does not depend on class-priors and thus is relatively robust against the class-prior shift.
We will include these results. | Summary: This paper proposes a new method for AUC maximization under covariate shift using positive-unlabeled (PU) data from the training distribution and unlabeled data from the test distribution. Given the challenges of estimating class priors of the training and test distribution, this paper theoretically derives two estimators for maximizing AUC such that they do not require the estimation of class priors. The paper also proposes an approach for importance weighting estimation for the AUC risk estimators. Empirical experiments on image and tabular datasets show overall performance improvement over existing approaches.
Claims And Evidence: The main claim of the paper – that learning from PU training data and unlabeled test while adapting to covariate shift, using their AUC risk estimators improves on the state-of-the-art – is supported by theoretical proofs and experimental results.
Methods And Evaluation Criteria: Proposed method, the methods which are included for comparison, and evaluation on image and tabular benchmark datasets makes sense for this problem.
Theoretical Claims: Theoretical derivation of the two AUC risk estimators, approach to combine them, and approach to estimate importance weights seem to be correct. The steps described in the paper are detailed and make good use of the page limits.
Experimental Designs Or Analyses: Covariate shift is simulated using standardized approaches from existing literature. Paper takes appropriate steps to for fair comparison among different approaches. Statistical significance tests confirm performance improvement across different datasets and class priors.
Supplementary Material: The supplementary material describes limitations of assuming that no other type of shift (other than covariate shift) exists across training and test distributions. Hyperparameters for neural networks are discussed. Experimental results on tabular datasets, running times and full results with standard deviations are shown in the supplementary material.
Relation To Broader Scientific Literature: The key contributions of the paper build on existing approaches in this domain but highlight the importance of specific parts of their proposed approach (such as recommending joint estimation of importance weighting during classifier learning over two-step importance weighting) which allow this approach to outperform existing state-of-the-art models.
Essential References Not Discussed: The related works sufficiently describes relevant literature on this topic. The proposed method is compared against a large set of existing approaches published in recent years.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We appreciate your positive comments on our paper.

---

Summary: The paper aims to optimize the AUC under a covariate shift, i.e., when the test distribution of inputs differs from the training distribution. Considering the difficulty of collecting negative examples, this work focuses on the positive-unlabelled (PU) setting. To solve this problem, the paper first proposes two importance-weight-based risk estimators that depend only on PU training and unlabelled test data. A similar technique estimates the importance weights with a learnable model. Empirical studies on MNIST, FashionMNIST, SVHN, and CIFAR10 are conducted to validate the proposed method.
Claims And Evidence: The claims are overall clear and well-supported:
**Claim 1.** New setting: The paper proposes to optimize AUC under the covariate shift with PU data.
**Evidence:** Related work in Section 2 indicates that such a setting is different from previous work and reasonable.
**Claim 2.** New methods: Two estimators of the AUC risk are proposed, which can be used to learn a score function without class priors; a dynamic method for predicting importance weights is provided.
**Evidence:** According to Eq. (17) and Eq. (19), the proposed estimators are unbiased and do not require the class priors. Besides, Eq. (23) provides a loss to fit a bounded extension of the importance weights.
**Claim 3.** Superior experimental performance.
**Evidence:** Table 1 shows that the proposed method outperforms most of the competitors in various datasets.
Methods And Evaluation Criteria: The proposed method makes sense since it can optimize AUC using only PU data without class priors. However, the benchmark datasets could be further improved: only small-scale datasets (resolution $\le 32 \times 32$ and about tens of thousands of images) are used to validate the effectiveness. The results would be more convincing if larger benchmarks were used, e.g., Tiny-ImageNet-LT or iNaturalist.
Theoretical Claims: I have checked and confirmed the correctness of the proofs.
Experimental Designs Or Analyses: The experiments could be further improved in the following aspects:
1. Both datasets and models (a few layers) are small-scale. It should be validated whether the proposed method is scalable to larger datasets or backbones (e.g., ResNet or DenseNet).
2. This work focuses on the covariate shift, i.e., **the input distributions are different** ($p\_{tr}(x) \neq p\_{te}(x)$). However, as described in lines 315~328, the training data and test data are sampled under different **class priors** (categories of the images). A sampling strategy based on the inputs (e.g., style, noise, color) instead of the classes would be more reasonable.
3. Details of the competitors (teAUC, trteAUC, UDAUC) are missing. To ensure fairness, how these methods utilize different data could be clarified.
4. Some results seem counter-intuitive. PAUC only achieves about 0.23~0.5 under most settings, but even random guessing achieves an AUC of 0.5. Please provide more analyses and results, such as the AUC on the training distributions. Another issue is the effect of the class prior $\pi$. As $\pi$ varies from 0.01 to 0.1, the tasks become more manageable with more positive examples, yet the test AUC drops for most methods.
5. The hyperparameter $\alpha$ controls the trade-off between the upper bound and the approximation error in the importance weight estimation. Therefore, a sensitivity analysis on $\alpha$ should be provided.
6. According to Appendix D, the learning rate is set to $10^{-4}$ for all methods, which might be suboptimal for some competitors. A common strategy is searching for the best learning rate for each method.
Supplementary Material: I have reviewed the appendix, especially the proofs and the experiment details.
Relation To Broader Scientific Literature: The paper extends AUC optimization, PU learning, and covariate shift. These areas are well studied, but the paper is the first work to jointly consider these problems. The main idea follows the mainstream covariate shift methods and develops some novel techniques. It could be applied to cyber security and medical care.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ### **Other Strengths**
1. Some real-world applications, such as cyber security and medical care, support the idea of jointly considering AUC optimization and covariate shift.
2. The presentation is clear and easy to understand overall.
### **Other Weaknesses**
1. The main concern is the soundness of the experiments (see Experimental Designs Or Analyses). This work would be more attractive if the experiment issues were addressed.
2. The organization could be further improved. For example, the main paper presents too many detailed proofs. Although it is clear for readers who spend more time reading this paper, it is hard to find the main conclusions at first glance.
Other Comments Or Suggestions: 1. Some equations are overwidth (e.g., Eq. (6), Eq. (7), Eq. (14)).
2. An abused notation $m$ is the index of unlabelled data before Eq. (20) and indicates the weights estimator in Eq. (22).
Questions For Authors: 1. A key to removing the positive-positive loss in Eq. (15) is using a symmetric function ($\sigma(z)+\sigma(-z)=1$). However, such a surrogate loss might suffer from a gradient vanishing problem if the predicted scores are near $0$ or $1$. Is it possible to practically use other surrogate losses, such as hinge or square loss?
2. How does the proposed method perform without a covariate shift? In this case, is the proposed method still comparable to previous PU & AUC optimization methods?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for your positive and constructive comments.
We will revise the paper according to your suggestion regarding the structure and notation.
We added experimental results in the anonymous URL: https://anonymous.4open.science/r/icml25s-6C74/results.pdf
> the benchmark datasets could be further improved: only small-scale datasets are used to validate the effectiveness.
We performed the additional experiments on larger datasets.
Our method outperformed the others in Table A of the anonymous URL.
Please see the answer to the first comment of Reviewer Gocn for details.
> This work focuses on the covariate shift, i.e., the input distributions are different. However, as described in lines 315~328, the training data and test data are sampled under different class priors (categories of the images). A sampling strategy based on the inputs (e.g., style, noise, color) instead of the classes would be more reasonable.
Thank you for the constructive comment.
As described in Line 317, we created the covariate shift by using the sampling strategy of the AISTATS paper of covariate shift adaptation (Aminian et al., 2022).
Specifically, we first constructed positive and negative classes by partitioning the dataset's original classes into two groups.
Then, we created $p_{{\rm tr}} ({\bf x}) \neq p_{{\rm te}} ({\bf x})$ by reversing the high- and low-density input regions based on the original classes between the training and testing.
In our setting, training and test class-priors were the same ($\pi_{{\rm tr}} = \pi_{{\rm te}}$).
Also, we used the epsilon dataset in Appendix E.3, where the covariate shift was created based on the inputs (the distance between the feature vectors) as in (Sakai & Shimizu, 2019).
We would like to use the suggested strategy in future work.
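For concreteness, the group-based sampling strategy described above can be sketched in a few lines. The toy labels, the group split, and the 9:1 vs. 1:9 group ratios below are illustrative choices of ours, not the exact configuration used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 original classes; classes {0, 1} form the positive class,
# {2, 3} the negative class. Within each, one class belongs to "group 1"
# and the other to "group 2" (the partition used to create the shift).
labels = rng.integers(0, 4, size=20000)
group = np.isin(labels, [1, 3]).astype(int)  # 0 = group 1, 1 = group 2

def sample_split(group, n, p_group1, rng):
    """Sample indices so that group 1 appears with probability p_group1."""
    w = np.where(group == 0, p_group1, 1.0 - p_group1)
    return rng.choice(len(group), size=n, replace=False, p=w / w.sum())

# Reverse the high-/low-density input regions between training and test
# (a 9:1 vs. 1:9 group ratio is our illustrative choice).
train_idx = sample_split(group, 2000, 0.9, rng)
test_idx = sample_split(group, 2000, 0.1, rng)

print("train group-1 fraction:", (group[train_idx] == 0).mean())
print("test  group-1 fraction:", (group[test_idx] == 0).mean())
```

Because each of the positive and negative classes contributes one original class to each group, the class priors remain (approximately) equal between training and test while $p_{{\rm tr}}({\bf x}) \neq p_{{\rm te}}({\bf x})$.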
> Details of the competitors (teAUC, trteAUC, UDAUC) are missing.
As described in the second paragraph of Section 5.2, the loss functions of teAUC and trteAUC are equivalent to Eq. (19) and Eq. (20) with $w({\bf x})=1$ for all ${\bf x}$ (i.e., they do not use importance weights), respectively.
The loss function of UDAUC is a weighted sum of the loss of trAUC and the coral loss that is calculated from unlabeled training and test data.
The coral loss is used to minimize the feature discrepancy.
teAUC used positive training and unlabeled test data.
trteAUC and UDAUC used positive training, unlabeled training data, and unlabeled test data as in our method.
We will describe their specific loss functions in the revised paper.
> Some results seem counter-intuitive. PAUC only achieves about 0.23~0.5 under most settings.
Please see our response to Reviewer 9PVZ's comment '(2) Why do the PAUC results in Table 1 perform particularly poorly.'
> As $\pi$ varies from 0.01 to 0.1, the tasks become more manageable with more positive examples, but the test AUC drops for most methods.
AUC-based methods, except for PAUC, prefer small $\pi$.
This is because they essentially use loss functions of the form, $\mathbb{E} _{{\bf x} ^{{\rm p}} \sim p ^{{\rm p}}} \mathbb{E} _{{\bf x} \sim p} \left[ f({\bf x} ^{{\rm p}}, {\bf x} ) \right]$, where $p ^{{\rm p}}$ and $p$ are positive and marginal densities.
When $\pi$ is small, $p$ can be regarded as negative density $p ^{{\rm n}}$. In this case, the above loss function becomes the original AUC risk.
Thus, these methods work well with small $\pi$.
Since PAUC has a different form of loss, its trend was different.
We will clarify this.
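To illustrate this argument concretely, a toy discrete example (our own illustration with a 0-1 pairwise loss, not the paper's actual surrogate) can verify that the loss over the marginal density approaches the original AUC risk as $\pi \to 0$:

```python
import numpy as np

# Toy 1-D example: discrete score values with positive and negative densities.
scores = np.array([0.0, 1.0, 2.0, 3.0])
p_pos = np.array([0.05, 0.15, 0.4, 0.4])   # positive density p^p
p_neg = np.array([0.4, 0.4, 0.15, 0.05])   # negative density p^n

# 0-1 AUC-type pairwise loss: 1 if the positive is not ranked above x.
L = (scores[:, None] <= scores[None, :]).astype(float)

def risk(p_first, p_second):
    return p_first @ L @ p_second

true_auc_risk = risk(p_pos, p_neg)  # E_{p^p} E_{p^n} [f]
for pi in [0.5, 0.1, 0.01]:
    p_marg = pi * p_pos + (1 - pi) * p_neg
    surrogate = risk(p_pos, p_marg)  # E_{p^p} E_{p} [f], as above
    print(pi, abs(surrogate - true_auc_risk))
```

Since $p = \pi p^{{\rm p}} + (1-\pi) p^{{\rm n}}$, the gap is exactly $\pi \, |\mathbb{E}_{p^{{\rm p}}} \mathbb{E}_{p^{{\rm p}}}[f] - \mathbb{E}_{p^{{\rm p}}} \mathbb{E}_{p^{{\rm n}}}[f]|$ and vanishes linearly in $\pi$.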
> a sensitivity analysis on $\alpha$ should be provided.
We have performed a sensitivity analysis on $\alpha$ in Appendix E.2.
Although the tendency of the results varied across datasets, our method with $\alpha = 0.5$ worked well.
> the learning rate is set to $10^{-4}$ for all methods, which might be suboptimal for some competitors.
In our preliminary experiments, we tested learning rates of $10^{-3}$ and $10^{-4}$ and confirmed no significant difference in performance. Based on this, we used $10^{-4}$ in this paper. We will mention this.
> A key to removing the positive-positive loss in Eq. (15) is using a symmetric function... Is it possible to practically use other surrogate losses, such as hinge or square loss?
Yes, we can practically use other surrogate losses, such as the hinge or square loss, although the second term in Eq. (14) is then no longer a constant.
In this case, $\pi_{{\rm tr}}$ is necessary for training, but it can be estimated from PU data (Kumagai et al., 2024).
Since the sigmoid function worked well in many PU learning and AUC maximization works, we also used it. We will discuss this in the revised paper.
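As a minimal numerical check (independent of the paper's actual estimators), the symmetry $\sigma(z) + \sigma(-z) = 1$ that removes the positive-positive term, and the vanishing-gradient behaviour the reviewer raises, can both be verified directly:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# The symmetry sigma(z) + sigma(-z) = 1 holds exactly for the sigmoid,
# which is what makes the positive-positive term in Eq. (14) a constant.
for z in [-10.0, -1.0, 0.0, 0.3, 5.0]:
    assert abs(sigmoid(z) + sigmoid(-z) - 1.0) < 1e-12

# The reviewer's vanishing-gradient point: sigma'(z) = sigma(z)(1 - sigma(z))
# shrinks quickly as |z| grows.
for z in [0.0, 2.0, 10.0]:
    g = sigmoid(z) * (1.0 - sigmoid(z))
    print(z, g)  # 0.25 at z = 0; about 4.5e-5 at z = 10
```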
> How does the proposed method perform without a covariate shift?
We additionally evaluated our method when there were no shifts.
Table B of the anonymous URL shows the results.
Since there were no shifts, we compared trPU and trAUC, which do not consider shifts.
Our method and trAUC showed comparable results, indicating that our method robustly works well when no shift exists.
We will include this result.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed reply and additional experiments! Most concerns are well-addressed, but the experiments could be further improved in the following perspectives:
1. **Larger datasets:** I agree that the updated datasets have more images, and the performance is consistent with previous results. However, the resolution of these datasets is still low ($\le 32\times 32$). In real-world scenarios, the resolution is generally larger than $224 \times 224$ for most images, so the effectiveness of the proposed method should be verified on images with larger resolutions.
2. **Other model architecture:** Perhaps due to my imprecise description, the authors missed the transferability issue of the proposed method to other model structures. Deep models used in most applications are much more complex than MLPs with a few layers. In addition, for images with larger resolutions, the performance and efficiency of MLPs will be significantly reduced, so it is necessary to verify the effectiveness with other architectures.
3. **The learning rates:** I agree that the Adam optimizer might be insensitive towards the learning rate for some methods. However, according to our experience, some losses require a significantly different learning rate to achieve their best performance. It is highly suggested to test other learning rates from $10^{-6}$ to $10^{-1}$.
Based on the above considerations, I prefer to keep the original rating due to the experimental issues. If the authors only have limited GPUs, conducting these experiments might take a few days. Therefore, it is understandable that these issues cannot be fully addressed at the discussion stage, but the reviewer still expects the authors to fix these deficiencies as much as possible.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your reply and questions.
>3: The learning rates: I agree that the Adam optimizer might be insensitive towards the learning rate for some methods. However, according to our experience, some losses require a significantly different learning rate to achieve their best performance. It is highly suggested to test other learning rates from $10^{-6}$ to $10^{-1}$.
Thank you for the valuable suggestion.
We agree on the importance of the learning rate.
Thus, we have investigated the performance obtained by varying the learning rate within $\\{10^{-6}, \dots, 10^{-1} \\}$.
The average test AUCs over different class-prior $\pi$ within $\\{ 0.01, 0.05, 0.1 \\}$ of our method and trAUC, which is the most basic baseline, are as follows:
| Method | Dataset | $10^{-6}$ | $10^{-5}$ | $10^{-4}$ | $10^{-3}$ | $10^{-2}$ | $10^{-1}$ |
|--------|----------|------|------|------|------|------|------|
| Ours | MNIST | 0.709 | 0.760 | 0.804 | 0.806 | 0.796 | 0.738 |
| Ours | FMNIST | 0.854 | 0.914 | 0.907 | 0.873 | 0.874 | 0.789 |
| Ours | SVHN | 0.507 | 0.532 | 0.689 | 0.682 | 0.669 | 0.564 |
| Ours | CIFAR10 | 0.810 | 0.897 | 0.886 | 0.884 | 0.789 | 0.644 |
| trAUC | MNIST | 0.693 | 0.756 | 0.787 | 0.785 | 0.779 | 0.784 |
| trAUC | FMNIST | 0.846 | 0.916 | 0.909 | 0.895 | 0.891 | 0.876 |
| trAUC | SVHN | 0.500 | 0.503 | 0.547 | 0.553 | 0.537 | 0.533 |
| trAUC | CIFAR10 | 0.761 | 0.879 | 0.874 | 0.870 | 0.865 | 0.738 |
As observed, the $10^{-4}$ value used in our paper consistently shows good performance across all datasets.
We would like to include this result in the revised paper.
> 1 and 2: Larger datasets and other model architecture.
Thank you for the important suggestion.
We are currently conducting experiments using the Food101 dataset [a] and the ResNet-18 model, which allows us to evaluate our method with a larger dataset and model. As you have already recognized, due to our limited computational resources and time constraints, we have only been able to complete a portion of the experiments. Therefore, we apologize that this is a report of our current progress.
The Food101 dataset consists of image data from 101 food categories and is widely used for image classification tasks.
The maximum length of each image is 512 pixels.
We resized each image to $224 \times 224$ pixels.
To create a binary classification problem, we divided the original 101 categories into sweets-related (positive) and main dish-related (negative) classes. Then, following the procedure described in Section 5.1, we split the original categories within each positive/negative into two groups, assigning the first half with smaller class indices to the first group and the remaining to the second group. We then created the covariate shift by using the group ratio of 9:1 for the training and 1:9 for the testing. Details of the group assignments will be included in the revised paper.
We used 2,500 (200) positive and 25,000 (2,000) unlabeled training data and 25,000 (2,000) unlabeled test data for training (validation). We used 3,000 test data for evaluation.
As for ResNet-18, we did not use pre-trained weights to purely investigate the performance with the given data.
We set the learning rate of Adam to $10^{-4}$ based on some empirical tuning.
The average test AUCs over four trials are as follows:
| $\pi$ | Ours | trAUC | teAUC | trteAUC | UDAUC |
|-------|------|-------|-------|---------|---------|
| 0.01 | 0.590 | 0.585 | 0.601 | 0.585 | 0.577 |
| 0.1 | 0.585 | 0.578 | 0.554 | 0.578 | 0.590 |
| avg | 0.588 | 0.582 | 0.578 | 0.582 | 0.584 |
Here, we compared trAUC, teAUC, trteAUC, and UDAUC because they showed good results on the other datasets in Table 1 and are AUC-based methods like ours, which makes it easier to know the effect of the importance weighting.
Our method performed slightly better than these methods on average.
The performance may be further improved through more meticulous hyperparameter tuning.
We also want to include the results of the other methods in the revised paper.
Finally, thank you again for your helpful feedback. It helped us clarify our method's characteristics and improve the paper's quality.
[a] Food-101 -- Mining Discriminative Components with Random Forests, ECCV'14
---

Fixed-Confidence Multiple Change Point Identification under Bandit Feedback
Accept (poster)

Summary: This paper investigates the problem of identifying multiple change points in a piecewise constant function under bandit feedback, ensuring a fixed level of confidence in the results.
The authors consider two scenarios: 1) known number of $N$ change points, and 2) unknown number of change points ($m \geq N$).
They establish instance-dependent lower bounds on the sample complexity of identifying change points.
Building on this, they design a computationally efficient algorithm inspired by Track-and-Stop, which achieves asymptotic optimality.
---
### **Update After Rebuttal**
The authors have successfully addressed my concerns, so I will maintain my positive score.
Claims And Evidence: The overall claims in the paper are clear and well-supported by both theoretical analysis and empirical results.
Methods And Evaluation Criteria: Overall, the proposed methods make sense for the problem setting.
Theoretical Claims: The theoretical claims are strong and well-reasoned. While I didn’t go through every detail of the proofs, they seem correct.
Experimental Designs Or Analyses: The experimental designs are sound and sufficiently support the theoretical results.
Supplementary Material: I have primarily reviewed the proofs for the main theorems, along with parts of other sections.
Relation To Broader Scientific Literature: As discussed in Section 7, I agree that identifying all change points whose magnitudes exceed a threshold $\epsilon >0$ is a more practical and interesting problem setting. I hope that the methods and insights from this paper can also contribute to addressing this more practical setting.
Essential References Not Discussed: I think the paper covers the related work quite thoroughly.
Other Strengths And Weaknesses: **Other Strengths:**
1. The proposed algorithms are computationally efficient.
2. The empirical results clearly strengthen the paper.
**Other Weaknesses:**
1. In the setting where the number of changes $m$ is unknown ($N \leq m \leq K$) and the original action space is continuous, the practical choice of $K$ is unclear. Specifically, if the true number of changes $m$ is unknown, it is impossible to confidently select an appropriate number of actions $K$ since $K$ must be at least $m$. This ambiguity limits the practicality of the algorithm, as one cannot set $K$ without prior knowledge of $m$.
Other Comments Or Suggestions: No other comments.
Questions For Authors: Regarding the "other weakness," can you provide any practical method to address the issue of determining $K$ (the number of discretization points) when $m$ is unknown?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Thank you for your review of our work. We appreciate that you found the claims in the paper clear, that the theoretical claims were strong and well-reasoned, and that the experimental results were supportive of our theoretical contributions. We also are hopeful that the proposed methods will contribute to addressing additional interesting problems, as you point out. We respond to some of your comments below.
---
**Extension to the Continuous Setting:**
If the action space were continuous, then we cannot expect to identify a change point exactly. In this case, it would be reasonable to introduce some acceptable ‘tolerance’ $\eta$ in error between our identified change points and their true locations. This would naturally lead to a uniform discretization with gaps of size $2\eta$ meaning that there are $K = \lceil 1/2\eta \rceil$ arms (note that this is similar to the setting considered by Lazzaro & Pike-Burke, 2025). The tolerance level $\eta$ can either be set using domain-specific knowledge of an acceptable tolerance; or $\eta$ can be set by assuming that there is some minimum distance between the change points. This is reasonable as we cannot expect to detect changes if they are arbitrarily close to each other.
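For concreteness, a minimal sketch of this discretisation (assuming the action space is $[0, 1]$; the tolerance values are illustrative):

```python
import math

def num_arms(eta):
    """Number of uniformly spaced arms needed for tolerance eta on [0, 1]."""
    return math.ceil(1.0 / (2.0 * eta))

for eta in [0.5, 0.05, 0.005]:
    print(eta, num_arms(eta))  # 1, 10, 100
```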
It would be interesting to consider approaches more focused on this continuous analogue of our problem, such as adaptive discretisation/Zooming which are popular in continuous action spaces [1]. However, this would **still** need some reason/knowledge or domain-specific guidance on when to stop for this fixed confidence setting. This is because there could be changes in mean that occur over an arbitrarily small region of the action space (i.e. changes could occur arbitrarily close to each other). Hence there will be some change points that we may inevitably miss if we want to stop in finite time in general. This is normally avoided in regret-minimisation settings by assuming that the mean reward function is sufficiently smooth across the action space [1,2]. However, in our abruptly-changing setting there are no such smoothness assumptions. Therefore, to stop in finite time in our setting, we would need to make an assumption on the distance between adjacent change points (mentioned above) or a restriction on the number of change points (‘m’, which you point out).
We will include more discussion of the extension to continuous action spaces in the camera ready version. Thank you for the suggestion.
---
**References:**
[1] Kleinberg, Robert, Aleksandrs Slivkins, and Eli Upfal. "Bandits and experts in metric spaces." Journal of the ACM, 2019
[2] Bubeck, Sébastien, et al. "X-Armed Bandits." Journal of Machine Learning Research, 2011.

---

Summary: Goal is to identify with high confidence the change points in the mean reward of the actions across the action space, by using as few samples as possible. A new Track-and-Stop algorithm is proposed for this task with asymptotic optimality. Matching lower bounds are also proven.
-- update after rebuttal --
The authors' clarification of the novelty of their work w.r.t. Garivier & Kaufmann (2016) is convincing and helpful. I decide to keep my score.
Claims And Evidence: The paper is well-written, with matching upper and lower bounds presented for both the single changepoint and multiple changepoints settings.
Methods And Evaluation Criteria: Simulation result shows that the stopping time of the proposed algorithm follows the lower bound pretty closely.
Theoretical Claims: I only checked the theoretical claims intuitively, but didn't check the proofs in detail.
line 318-323: is the coupled effect helpful or unhelpful? it’s unclear from the writing here. What I find counterintuitive is that you say that it’s harder to obtain closed-form expressions for $\alpha^*$ when N is known than when N is unknown (where you obtained (17)) Why is this the case?
Experimental Designs Or Analyses: Can you compare the performance of Algorithm 2 with the alternative approach described in lines 415-417? This would be an informative result to have.
Supplementary Material: I skimmed through Appendix D.
Relation To Broader Scientific Literature: An interesting work for the bandits community and changepoint detection community.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Presentation is clear, and I particularly appreciate how the authors always devote a sentence or two after each theoretical claim to explain its significance and implications.
I think the distinction between the present setting and the non-stationary bandits setting described in lines 58-62 (right) should probably be moved to earlier on. This is important.
Other Comments Or Suggestions: line 272 (right): it might be clearer to refer to Theorem 4.1 instead of Corollary 4.3
nitpick: line 138 (right) it’s clearer to first define Exact-(N, $\delta$) and then substitute in N=1
nitpick: can combine (11) into Prop 4.4 to simplify (7) and stay consistent with line 3 of Algorithm 1
Questions For Authors: Addressing the following questions would enhance my understanding of the presented results, and I will update my evaluation accordingly:
1. Garivier & Kaufmann 2016 is cited quite a few times in the paper, could you elaborate the novelty of the present work as compared to GK2016? In particular, it seems quite a few results in this paper (e.g. Theorem 4.22, D-tracking) are specialised form of results from GK2016.
2. line 318-323: is the coupled effect helpful or unhelpful? it’s unclear from the writing here. What I find counterintuitive is that you say that it’s harder to obtain closed-form expressions for $\alpha^*$ when N is known than when N is unknown (where you obtained (17)) Why is this the case?
3. Can you compare the performance of Algorithm 2 with the alternative approach described on line 415-417?
4. eq (5): how do you initialise?
5. line 237 (right): can you elaborate how Z(t) relates to the CUSUM statistic, by giving exact expressions? I'm not suggesting to add this to the paper, but just to help me appreciate your results better.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for the detailed review and for your constructive questions. We are glad that you found the paper well-written and that you think our work would be interesting for the bandits/change point community. We respond to your comments below.
---
**Other Strengths and Weaknesses:**
We will include further discussion earlier in the paper on the distinction between our setting and non-stationary bandits, emphasising the novelty of our work. Thank you for the suggestion.
---
**Other Comments Or Suggestions:**
Thank you for pointing these out, we will update the paper accordingly.
---
**Questions For Authors:**
1. (Garivier & Kaufmann, 2016) is a seminal work and their tracking ideas have motivated the class of track-and-stop strategies and theory for a wide variety of works, including our own. This is because the lower bounds they construct for best arm identification are also applicable to a wide variety of other pure exploration bandit problems (Ch. 33.2, Lattimore & Szepesvári, 2020). However, the bounds they present are in implicit form (i.e. as the solution of an optimisation problem). Hence, their tracking methods are computationally expensive and somewhat unintuitive. In our work we are able to build off the bounds from (Garivier & Kaufmann, 2016) to provide intuitive instance-dependent lower bounds which can then, in turn, be used to define computationally efficient and intuitive tracking procedures for our setting. We will include additional discussion in the paper to clarify the novelty of our results.
2. Having the additional knowledge of the true number of change points in the environment makes the learning problem easier than when the learner does not know the exact true number of change points. Intuitively, when the number of change points is known and we have some information about a subset of them, this could be informative regarding the location of the other change points. For example, suppose we knew there are exactly three change points $x_1 < x_2< x_3$ in an environment and suppose a learner has sampled around $x_1$ and $x_3$ sufficiently such that they are confident of their respective locations. These samples would then also be informative about the location of $x_2$ and will help identify $x_2$ with fewer samples, since we know there is only one change point remaining. Incorporating this information means that the optimal proportion of actions we should play near one change point no longer just depends on that change point, but also on adjacent change points. Because of this, the optimization problem in equation (14) becomes more complicated and finding a general closed form solution becomes more challenging. However, Theorem 5.2 tells us that the cost in sample complexity of not knowing the true number of change points is at most a factor of 2 asymptotically. Moreover, even when the number of change points is known, we can run the same computationally efficient algorithm, MCPI. This algorithm avoids solving the complex optimization problem and obtains sample complexity within a factor of 2 of the optimal (non-closed form) rate asymptotically. We will improve the writing of this section of the paper to clarify this and include additional discussion, thank you for highlighting this.
3. We believe that the theoretical (asymptotic) performance of MCPI (Algorithm 2) and the alternative approach we mentioned would be similar. However we re-emphasise that MCPI will quickly identify very large changes when they are easy to identify, whereas the alternative approach will identify all change points at the same rate due to the tracking methods used. More importantly, we believe Algorithm 2 is much easier to extend to the case where we have no knowledge about the number of change points as different/new change points can be approached sequentially. We will clarify this in the paper.
4. We only update the $\hat{\mu}$ values after playing each of the actions once in both algorithm 1 and 2. Hence, the change point estimate will be initialised only after the initial set of observations have been made across the whole action space. We will update the paper to discuss initialisation, thank you for pointing this out.
5. Define $\hat{\mu}_ {i:j}$ and $T_ {i:j}$ as the empirical means and the number of samples made from arms i to j inclusive, respectively. Then the CUSUM statistic for exactly one change point can be written (Eq 3, Verzelen et al., 2020) as $C_j = [\hat{\mu}_ {1:j} - \hat{\mu}_ {j+1:K}] \sqrt{T_ {1:j}T_ {j+1:K}/T_ {1:K}}$. If we square this $C_j$ statistic, we see that its form is very similar to (11) except we consider only the samples adjacent to the change point rather than all actions to the left and right of the change point. This exclusion is important for settings in which the true number of changes is unknown, as considered later in our paper. We will add further discussion of this to the appendix.
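To make the quoted statistic concrete, here is a minimal numerical sketch (not the authors' code; the arm means, noise level, and sample sizes are illustrative) of the squared CUSUM statistic $C_j^2$ for locating a single change point over a fixed arm ordering:

```python
import numpy as np

def cusum_statistics(samples):
    """Squared CUSUM statistic C_j^2 for each candidate change point j,
    following the form quoted from Verzelen et al. (2020).
    `samples` is a list of 1-D arrays, one per arm, in the fixed arm order."""
    sums = np.array([s.sum() for s in samples])
    counts = np.array([len(s) for s in samples])
    K = len(samples)
    T_total = counts.sum()
    stats = []
    for j in range(1, K):  # candidate change point between arms j and j+1
        T_left, T_right = counts[:j].sum(), counts[j:].sum()
        mu_left = sums[:j].sum() / T_left
        mu_right = sums[j:].sum() / T_right
        C = (mu_left - mu_right) * np.sqrt(T_left * T_right / T_total)
        stats.append(C ** 2)
    return np.array(stats)

rng = np.random.default_rng(0)
# 6 arms with means (0,0,0,1,1,1): a single change point after arm 3
samples = [rng.normal(0 if i < 3 else 1, 0.5, size=200) for i in range(6)]
stats = cusum_statistics(samples)
print(1 + int(np.argmax(stats)))  # estimated change-point location
```

With a well-separated change, the statistic peaks sharply at the true split, since only the correct split has both group means uncontaminated.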
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. Their clarification of the novelty of their work w.r.t. Garivier & Kaufmann 2016 is convincing and helpful. I decide to keep my score. | Summary: This work introduces a fixed-confidence piecewise constant bandit problem, and provides instance-dependent lower bounds for the complexity of change point identification in this problem. This work also devises a computationally efficient algorithm as a variant of track-and-stop and proves its asymptotic optimality.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: partially, the theoretical claims seem to be correct
Experimental Designs Or Analyses: yes, this work uses a simple and straightforward experimental design, but I have some questions for the setup, especially for ordering K arms, see "Other Comments Or Suggestions" below.
Supplementary Material: only part Appendix A
Relation To Broader Scientific Literature: The key contributions of this paper is related to multi-arm bandit problems, but I think the scope might be a bit narrow --- there is no concrete application examples provided to justify the setup, also the change-point defined in this work is a bit confusing since it requires a perfect ordering of the arm indexes. And the work is less related to change-point detection literature which instead focuses more on temporal changes.
Essential References Not Discussed: I am only familiar with a few key papers in this area, and I am not aware of other works studying the same problem, so to the best of my knowledge, no.
Other Strengths And Weaknesses: strengths: a novel problem for identifying change-point within arms, and theoretical lower bounds on sample complexity are provided; moreover, the practical algorithms that can achieve asymptotic optimality are also proposed.
weaknesses: the motivation for the problem setting considered is a bit vague, and it is not clear whether such change-point identification can lead to some downstream task performance improvements such as reward maximization, etc. Also, it seems to be a strict assumption (or to need certain prior knowledge) that the ordering of the arm indexes satisfies the N change-point assumption/definition in this work.
Other Comments Or Suggestions: For the scenario with one or multiple change-points, the formulation presented in equations on lines 120 and 129 assumes a "correct" ordering of K arms, such that the change-points are defined in terms of the arm index j∈{1,…,K}. Could the authors justify why such an ordering always exists or can be assumed in practice? For example, without such pre-knowledge of the correct ordering {1,1,1,2,2,2}, I could perhaps order the arms to be {1,2,1,2,1,2} in their mean reward, and does the proposed method still work in such random ordering?
To elaborate further on this point, a change-point refers to a specific time index at which the distribution changes (as commonly seen in non-stationary, piecewise constant bandit problems). However, in this work, the change-point refers to a spatial index. A key open question is thus how one should define an appropriate spatial ordering to preserve a "piecewise constant" structure for the mean reward function. Providing a more concrete practical example would help clarify this setup. In general multi-arm bandit problems, each arm may have a different mean reward, and even after grouping similar rewards together, these arms may not necessarily be neighbors in the given indexing. For instance, the example provided in Appendix A assumes a very restrictive scenario: without prior knowledge of reward means, the 19 arms are already ordered in a way resulting in exactly N=2 change-points (yielding the sequence 2,...,2, 4,...,4, 0,...,0). A random ordering of these arms, however, would produce significantly more change-points according to the definition used in this paper.
Moreover, how does the problem considered differ from the best-arm identification problem in multi-arm bandits when there is only one change-point (or equivalently, when there are only two possible reward distributions: low and high)? A comparison of sample complexity between these two settings would be helpful. Additionally, since this work primarily focuses on the expected sample size (stopping time) required to accurately identify the “change-point,” it implicitly serves as an indirect approach to eventually maximizing the reward. This aspect naturally broadens the scope of the proposed algorithm. Thus, I am curious about how the eventual reward compares to traditional algorithms, such as UCB-type approaches. It would greatly clarify the practical utility of the proposed approach if the author(s) could briefly discuss this comparison.
In equations (7) and (11), the notation $\hat\Delta$ appears but is not defined.
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed, constructive comments and we are glad that you found the problem novel. We provide some responses to your comments below.
---
**Related Literature:**
You are correct that most of the literature on bandits with change points is focused on non-stationary environments which change over time and these exploit online change point analysis methods. One of the novelties of our work is that we instead consider a discontinuity in the action space and assume that the mean rewards are stationary across time. This is motivated by several real world problems (see below). This was discussed in Section 2, however we will include further discussion earlier in the paper to avoid confusion and emphasise the novelty of our work. Thank you for the suggestion.
---
**Order of the arms:**
Our goal in this paper is to develop algorithms for settings in which there is structure in the mean rewards across the action space. This is motivated by experimental settings in which the order of the arms has a spatial or experimental meaning. For example, in material development we may want to identify under what experimental conditions new materials abruptly change between different physical behaviours (Park et al., 2021). Hence, we could test the behaviour of a new material at different pressure levels. There is a natural ordering of pressure levels. Moreover, we expect similar behaviour from the material as we increase the pressure until we observe abrupt changes in its behaviour. To find the location of the changes it is necessary to consider the fixed natural ordering of pressure levels. In this setting, and many others, it would not make sense to randomize the order of the arms as this would distort the meaning of the problem.
Section 1 discusses several other real world settings where there is a change point in an action space with fixed ordering. We also point out that there are many other well-studied bandit problems which consider fixed orderings of arms, for example monotone bandits [1] and continuum-armed bandit problems such as Lipschitz bandits [2].
We finally note that the problem where we do not care about arm ordering and aim to group arms into clusters with the same reward is known as Clustering Bandits (Yang et al., 2022). As discussed in Section 2, this is a different problem to the one we study, and we show that exploiting the structure present in our problem can lead to improved performance.
We hope this addressed your main concern and you may consider raising your score.
---
**Complexity of our setting vs best arm identification:**
As suggested, suppose there are K arms ($\sigma^2$-Gaussian) with mean rewards $(0,...,0,\Delta)$ such that there is a unique ‘best-arm’ and one change point. From [Garivier & Kaufmann, 2016] the complexity of the best-arm identification problem will asymptotically be of order $K\frac{\sigma^2}{ \Delta^2} \log(1/\delta)$ and from our Corollary 4.3 the change point identification problem has order $\frac{\sigma^2}{ \Delta^2} \log(1/\delta)$. We no longer have a linear dependence on K since we can use our piecewise constant structure and information from across our action space to confidently locate the change in mean, whereas in best arm identification we have to sample each of the K arms sufficiently to confidently identify the arm with the highest mean reward. We will include further discussion of this in the paper, thank you for the suggestion.
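The factor-of-$K$ gap between the two rates can be sanity-checked numerically. Both expressions are asymptotic orders with absolute constants dropped, so the values below (and the choices of $K$, $\sigma$, $\Delta$, $\delta$) are illustrative only:

```python
import math

# Illustrative constants; only the ratio of the two orders is meaningful.
K, sigma, gap, delta = 20, 1.0, 0.5, 0.01

base = (sigma**2 / gap**2) * math.log(1 / delta)
best_arm_rate = K * base       # best-arm identification: linear in K
change_point_rate = base       # change-point identification: K-free

print(best_arm_rate / change_point_rate)  # ratio is exactly K
```

The $K$-free rate reflects that samples from every arm on each side of the split are pooled to locate the single change in mean, whereas best-arm identification must sample each arm separately.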
---
**Regret Extension:**
While the pure exploration set-up of this paper and the regret-minimisation set-up in other works are distinct, it would be interesting to consider extensions of our setting to the latter. As you allude to, an outcome of our learning methods could be the identification of regions in the action space with high mean reward. However, our current methods are designed to specifically locate the change points in piecewise constant structures, regardless of the adjacent rewards. Therefore modifications to our methods and analysis would be necessary in order to focus on regret minimisation. Hence, for now we leave these ideas for interesting future work and will include a discussion of this in the paper, thank you for the suggestion.
---
**Definition:**
Indeed $\hat{\Delta}$ was defined later in line 1011. This should have been included earlier and this will be rectified, thank you for bringing this to our attention.
---
**References:**
[1] Aziz, Maryam, Emilie Kaufmann, and Marie-Karelle Riviere. "On multi-armed bandit designs for dose-finding trials." JMLR, 2021
[2] Magureanu, Stefan, Richard Combes, and Alexandre Proutiere. "Lipschitz bandits: Regret lower bound and optimal algorithms." COLT, 2014. | null | null | null | null | null | null | null | null |
Laplace Transform Based Low-Complexity Learning of Continuous Markov Semigroups | Accept (poster) | Summary: The paper presents theoretical developments for learning the infinitesimal generators of Markov semigroups using the Laplace transform.
Claims And Evidence: The paper's claims are all theoretically substantiated. Empirical evidence backs the claims, but the experiments are of very small scale.
Methods And Evaluation Criteria: Ok
Theoretical Claims: I read the entire paper carefully, and understood some. I see no issues with correctness.
Experimental Designs Or Analyses: The experiments are small scale, but seem reasonable.
Supplementary Material: Skimmed through.
Relation To Broader Scientific Literature: The work is positioned sufficiently to its specific domain, but the broader picture is lacking.
Essential References Not Discussed: No issues.
Other Strengths And Weaknesses: S: The paper seems to present a strong theoretical advancement to operator learning
S: The theoretical presentation is convincing.
S: The empirical results support the contributions.
W: The presentation lacks exposition and the higher-level story gets drowned in all the technicalities. It was difficult to follow what was happening, and why. The technical presentation is dense and convoluted.
W: The results are consistent with the contributions, but they have small scale and are thus anecdotal. The method is only compared to one other method. The results show that one can learn the first two eigenfunctions from SDE dynamics. It is not demonstrated how is this beneficial, or what problem this solves. Why do we need these eigenfunctions? There are no benchmarks or result tables. I feel like the paper is not really realising its own potential, or had no room left for showing how the contributions lead to improved methods and results.
Other Comments Or Suggestions: I want to start by stating that I’m not a theoretician or expert in the paper’s domain, but have knowledge on many subtopics of this paper. Hopefully my review can still be useful in showing how the manuscript opens up to an interested outsider.
The paper is very theoretical and technical, with an abundance of math. The paper does little to make the material accessible to a wider audience. The presentation is technically excellent, although some parts show some looseness. I was able to follow maybe a third of the material, and the rest was too deep or cursory to be understood (which is ok).
However, one should be able to follow the higher-level story regardless: what problem are we solving, why the math is introduced, and what the math achieves. The paper is not particularly clear on these aspects, and there is insufficient motivation and introduction to the theoretical presentation to even follow the higher-level story. There is a lot of math’itis where math happens but it’s not clear why. The material is also not clearly contextualised into the wider literature. The paper presents probably too much material (experiments start on the last page), and the manuscript would have improved by prioritising some of the material (and moving some to the appendix) to allow room to breathe.
I'm rating the paper weak accept due to its strong theoretical contributions, although the presentation has lots of room for improvement. I'm looking forward to hearing author comments and revise my review accordingly.
Questions:
- Is TO a standard wording for this concept in the literature? How does this relate to pushforwards, or to FNO’s or to operator learning?
- What’s the connection of IG to Fokker-Planck or to Ito’s lemma? Eq 7 looks like an expected Ito. Can you contextualise this a bit more?
- The scope of the work is abstract: what task are you solving when you "learn markovian semigroups"? I think this paper is about learning operators, especially in SDE context, but what problems can we then solve by these operators or by learning semigroups? Are we learning the solution operators of PDEs, or something more general?
- What kind of observations do we need?
- The open problem is quite vague. The introduction states that unboundedness of IGs makes designing “estimators” “challenging”. What specific problems does the unboundedness pose? Why does it make things challenging? It seems that earlier methods can learn PDE operators just fine: how are they deficient?
- In stochastic processes we usually operate within some boundary constraint, or within some bounded domain. Is the unboundedness of IG then a problem in those contexts? Can you give a bit more explanation what the unboundedness of IG means?
Questions For Authors: - Why does $p_t$ have two arguments? What is $y$? Should this perhaps be interpreted as $p_{t,s}(X_t=x, X_s=y)$ for $t > s$?
- Is A_t a pushforward? Can you elaborate a bit more on what this means conceptually, and how it connects with IG
- What does A^* mean?
- The paper claims to study the transfer operators on a space of invariant measures. But in that space the TO seemingly does nothing: what is here to study?
- $f$ is first just a functional, and then later it’s from the space of invariant measures. Can you explain if these are two different $f$’s?
- In sec 2 L was an operator that turns functional f(x0) into a derivative of expected functional E[f(xt) | x0]. In sec 3 L is a mapping between two invariant measures. Are these two descriptions of L complementary, or did we just define a new, different L in sec 3?
- I don’t understand why the H and \calL have different metric structures. Surely if f,g are in both H and L, their inner products should be the same. Can you elaborate?
- Why can you assume that the RKHS is a subset of calL? The connection between the RKHS and stationary measures is not clear here. Does the kernel span only invariant measure functions? If so, how can you know such a kernel exists or how is it defined?
- It would be useful to expand example 2.2. to also describe it wrt A/f/L/etc
- I’m a bit confused by the setting of sec 3 first paragraph. We introduce a bunch of stuff, but there is little motivation or introduction. For instance, it’s not clear what the \calL is, why we want the L, why H is a subset of calL, and why we introduced kernels. All of this happens, but its purpose is not described: what is the task/problem/context?
- Eq 9 has phi, but usually that is unknown. How do we handle this?
- In eq 9 we draw X0 from pi. But pi is a invariant distribution, which is where the process converges at some (late) time. Surely it’s not the initial time: so why do we then write X0?
- The eq 9 has a H-norm. But the square error seems to be over just scalar evaluations of measures: why do we need the Hilbert norm for scalars?
- Eq 10 shows that some function of X0 is an integral of Xt’s. What does this mean conceptually? Is this perhaps the solution operator in the hilbert space?
- I struggle to follow eqs 9 and 10. I don’t see much connection to earlier material since notation changes from A/f/L/S to phi/psi/pi. Can you elaborate a bit more here?
- The eq 10 has the exp form for the solution of the process. Is this a limitation? Does this apply to all kinds of SDEs or semigroups?
- Is eq 11 a quadrature?
- What are A_t_j? Where do these come from, and are they known?
- What is h or v?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback. Due to space constraints, we address the key issues and questions briefly, committing to incorporate all suggestions in our revision. Further details are available upon request.
# Empirical evidence:
For our additional [results](https://green-chantal-92.tiiny.site), please see reply to __nvbC__ .
# Presentation
Following your suggestions, we'll emphasize motivations in Sec 3, address unboundedness effects, expand the app. on kernel-based operator learning for TOs and IGs, and add a notation table. If accepted, we'll use the extra page to enhance transitions between challenging concepts.
# Questions
- In stochastic process theory, TOs form the Markov semigroup associated with the process, acting as direct image functors in measurable spaces. When a measurable func pushes a measure forward, the TO describes the density transformation. Neural-network learning methods for TOs relate to FNOs but are distinct concepts (Nakao and Mezic, 2020).
- TOs are linear (evolution) operators defined on func spaces. We can say that $A_t$ acts as the pushforward of a function $f$ under the transition probability defined by $p_t$, making them crucial for understanding $(X_t)\_t$ dynamics. On the other hand, IGs, defined via Eq 25, are linear differential operators related to the eqs of motion via backward Kolmogorov eq. $\partial_t \mathbb{E}[f(X_t)|X_0=\cdot ]=L f$ for observable $f$ and forward Kolmogorov eq. (Fokker-Planck) $\partial_t q_t =L^* q\_t$ for probability density $q_t$ of $X_t$ (* being adjoint). Indeed, Eq 7 is obtained by applying Itô's lemma to $f(X_t)$ where $X$ is the solution of Eq 6.
- Learning the spectral decomposition of TO/IG is equivalent to learning the solution of an associated SDE, so we can predict the evolution of distributions, see reply to __CrN8__. Working with IG allows us to do this reliably in continuous time.
- As explained in Sec 5, the data is the observed trajectory of states.
- Galerkin projection methods (empirical risk minimization) suffer from spectral pollution when applied to unbounded operators (Kato 2012, Kostic et al. 2024). When the process evolves in a bounded domain $\mathcal X$, the IG is still typically unbounded, i.e. the IG doesn’t map all funcs $\mathcal{X}\to\mathbb{R}$ (observables of the process) in the $\mathcal{L}^2_\pi$ space to other funcs in that space; it’s only defined on a proper subset (the domain of the IG).
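For context, the Itô computation mentioned above (behind Eq 7) can be sketched in one dimension for $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ (a standard derivation, included here only to make the IG/Fokker–Planck connection explicit):

```latex
% Ito's lemma applied to f(X_t):
df(X_t) = f'(X_t)\,dX_t + \tfrac{1}{2} f''(X_t)\,(dX_t)^2
        = \Big[\, b(X_t)\, f'(X_t) + \tfrac{\sigma^2(X_t)}{2}\, f''(X_t) \Big]\, dt
          + \sigma(X_t)\, f'(X_t)\, dW_t
% Take expectations conditional on X_0 = x (the dW martingale term
% vanishes) and differentiate at t = 0:
Lf(x) = \partial_t\, \mathbb{E}\big[ f(X_t) \,\big|\, X_0 = x \big]\Big|_{t=0}
      = b(x)\, f'(x) + \tfrac{\sigma^2(x)}{2}\, f''(x)
```

The adjoint $L^*$ of this differential operator is then what drives the Fokker–Planck equation $\partial_t q_t = L^* q_t$ for the density of $X_t$.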
# Q. for Authors
[1] $(p_t)_t$ is a family of transition probability densities $p_t(x,y)dy=P(X_t \in dy|X_0=x)$ for all $x,y\in\mathcal{X}$. In this sense, $p_t(x,y)dy$ quantifies the probability that the process $(X_t)_t$ is in the infinitesimal set $dy$ at time $t$, given that it started at $x$.
[2-3] See above.
[4-5] While the action of TO can be defined for any measurable func $\mathcal{X}\to\mathbb{R}$, to characterize dynamics via semigroup, one needs the space of such funcs to be invariant under the action of TOs, and a typical choice is $\mathcal{L}_\pi^2$. Note that invariance of measure $\pi$ (both marginals of $P[X_t,X_0]$ are $\pi$), doesn't mean that TOs (conditional laws $P[X_t|X_0]$ ) are trivial.
[6] In fact, for all $t$, $A_t$ is an operator that turns a func $f$ into a func $A_t f$ s.t. for any $x\in\mathcal{X}$, $A_t f(x)=\mathbb{E}[ f(X_t )|X_0 =x]$. While $L$ is an operator that maps a func $f\in dom(L)=\mathcal{W}\_\pi^{1,2}\subset \mathcal{L}\_\pi^2$ to a func $Lf \in\mathcal{L}\_\pi^2$, meaning that $\int_{\mathcal{X}} |L f(x)|^2\pi(dx)<\infty$.
[7-8] In the RKHS theory it is standard to assume $\mathcal{H}\subset\mathcal{L}^2_{\pi}(X)$, $\pi$ being the probability of data samples (e.g. Gaussian kernel), and the main issue in the learning bounds is the difference between $\mathcal{H}$ (chosen) and $\mathcal{L}^2_{\pi}(X)$ (unknown) norms, e.g. (Steinwart&Christmann,2008).
[9] For conciseness, the defs of $A$, $L$, and $\pi$ in the Ornstein-Uhlenbeck case are in App A.1. If space permits we can include them in the main text.
[10-11] The funcs $\phi$ are not unknown but are provided when choosing the RKHS, e.g. $\phi(x)=e^{-\|x-\cdot\|^2/(2\sigma^2)}$ for Gaussian kernel.
[12-15] In Eqs 9-10, $X_0$ is not the SDE's initial cond. but any random variable with the invariant measure, see reply to Q1 of __PJzG__ for extra clarity. Note that $\phi(X_t),\psi(X_0), G^*\phi(X_0) \in \mathcal{H}$ are funcs, not scalars.
[14 & 16-17] $\mathbb{E} [\psi(X_0)|X_0=\cdot]:\mathcal{X}\to\mathcal{H}$ is the Riesz representation of lin. operator $R_\mu$ in RKHS $\mathcal{H}$, and our learning objective. It is the regression func related to the excess risk, see Prop A.1, where the risk is given via the target feature $\psi(X_0)$. Since we cannot observe it precisely (due to the integral form) in Eqs 11-12 we show how to approximate it via quadrature.
[18-19] $A_{t_j}$ is TO for time-delay $t_j$ (see Eq 2), $h$ was used as a generic func in RKHS $\mathcal{H}$ and $v$ as a vector in $\mathbb{R}^N$.
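The quadrature approximation discussed in [14 & 16-17] can be illustrated in a scalar toy setting. Assuming the integral in Eq 10 is of Laplace-transform type (as the paper's title suggests), on an eigenfunction of the generator with eigenvalue $\lambda$ the TO $A_t$ acts as multiplication by $e^{\lambda t}$, so $\int_0^\infty e^{-\eta t} A_t\,dt$ reduces to $1/(\eta-\lambda)$, which a truncated trapezoidal rule recovers (this is a toy computation, not the paper's estimator):

```python
import numpy as np

eta, lam = 1.0, -0.5                   # need eta > lam for convergence
t = np.linspace(0.0, 40.0, 4001)       # truncated, uniform quadrature grid
integrand = np.exp((lam - eta) * t)    # e^{-eta t} * e^{lam t}

h = t[1] - t[0]                        # trapezoidal rule on a uniform grid
approx = h * (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2)
exact = 1.0 / (eta - lam)

print(approx, exact)
```

The truncation error decays like $e^{-(\eta-\lambda)T}$ in the cutoff $T$, so a modest grid already matches the analytic resolvent value closely.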
---
Rebuttal Comment 1.1:
Comment: I've read the rebuttal. I'm keeping my score. I don't recognise the paper demonstrating any practical impact from the theoretical treatment.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their feedback. In our rebuttal (due to the limited space) we have focused on answering all their questions on the mathematical framework we have studied and to clarifying potential misunderstanding of the theory. We hope that this was useful to the reviewer, as it was beneficial to us to improve the presentation. Since we couldn’t **provide a larger context and emphasize enough the broader impact** of our theoretical and empirical results, let us try to elaborate now.
Understanding and accurately learning the spectral decomposition of the stochastic Koopman operator for continuous-time stochastic dynamical systems is **pivotal to a number of machine learning (ML) and artificial intelligence (AI) applications**. This approach, as we briefly review, has significantly impacted diverse fields, including molecular dynamics, time-series clustering, computational neuroscience, and beyond.
- **Molecular Dynamics.** The field of molecular dynamics has particularly benefited from spectral decomposition methods of Markov semigroups. Research on AI-augmented molecular dynamics using statistical mechanics demonstrates the importance of accurate spectral gap identification—the separation between slow and fast modes—in molecular simulations, see (Schütte et al. 2001). Work that we extend (Kostic et al. 2024) was used in Devergne et al. (2024) to demonstrate the impact of **IG-based methods in accelerating simulations and unlocking practical identification of meta-stable states**. They stress that, unlike popular TOs that have limited utility, _“[...] the infinitesimal generator is the adequate tool to deal with dynamical information from biased data”_. Further, in Devergne et al. (2025), published in the Journal of Chemical Physics, the authors conclude that the IG method _“[...] offers the exciting possibility of making close contact with experiments.”_ Note that our experiments show results equally good as (Kostic et al. 2024) but at an order of magnitude faster performance, which is particularly important for **scaling up IG methods to larger proteins**.
- **Spectral Clustering.** Klus and Conrad (2023) introduced a Koopman-based spectral clustering method tailored for directed and time-evolving graphs. By leveraging TOs, their approach shows how to identify coherent sets within complex networks, enhancing the analysis of temporal data structures. Further, Cabannes and Bach (2024), whose results on Galerkin projections of the IG we have directly generalized and improved, stress that learning the spectral decomposition of the IG _“[...] opens exciting follow-ups for any spectral-based algorithms, could it be spectral clustering, spectral embeddings, or spectral distances.”_ Note that in **the additional experiment we demonstrate that, in contrast to this method, our approach is reliable** in learning the spectral decomposition.
- **Computational Neuroscience.** Marrouch et al. (2020) applied data-driven Koopman operator techniques to analyze brain activity. Their work demonstrates the utility of Koopman spectral methods in capturing the spatiotemporal dynamics of neural signals, offering insights into brain function and potential applications in neurological disorder diagnostics. Further Ostrow et al. (2023) develop dynamical similarity analysis based on TOs that _“[...] opens the door to comparative analyses of the essential temporal structure of computation in neural circuits.”_, showing that TO based similarity metrics can distinguish learning rules in an unsupervised manner.
Collectively, these **studies underscore the broad impact of the spectral decomposition of TOs (stochastic Koopman operators) and their IGs across various ML and AI domains**, and we strongly believe that our contribution to the **theoretical understanding of, and methodological approach to, learning the IG’s spectral decomposition** is critical for building **reliable** methods in such scientific applications.
### **Additional Ref.**
- Schütte, C., Huisinga, W., & Deuflhard, P. (2001). Transfer operator approach to conformational dynamics in biomolecular systems. Springer Berlin Heidelberg.
- Devergne, T., Kostic, V., Pontil, M., Parrinello M. (2024) From biased to unbiased dynamics: An infinitesimal generator approach, NeurIPS 2024
- Devergne, T., Kostic, V., Parrinello M., Pontil, M. (2025) Slow dynamical modes from static averages. Journal of Chemical Physics. 162 (12)
- Klus, S. and Conrad, N. D., (2023) Koopman-based spectral clustering of directed and time-evolving graphs. Journal of nonlinear science 33.1.
- Marrouch, N., Slawinska, J., Giannakis, D., & Read, H. L. (2020) Data-driven Koopman operator approach for computational neuroscience. Annals of Mathematics and Artificial Intelligence, 88(11)
- Ostrow, M., Eisen, A., Kozachkov, L., & Fiete, I. (2023). Beyond geometry: Comparing the temporal structure of computation in neural circuits with dynamical similarity analysis. NeurIPS 2023 | Summary: The authors present an approach for learning continuous Markov semigroups. Notably their approach comes with theoretical guarantees at any time-lag. In addition, their approach scales linearly in the state dimension opening the door to apply their methods on high-dimensional problems. Finally they demonstrate their approach on two test problems.
Claims And Evidence: - *Novel approach for learning the spectral decomposition of the IG for Markov semigroups*: The proposed approach is novel as far as I'm aware. This is supported by a solid literature review and a detailed description of the approach. I personally found the algorithm blocks to be particularly helpful.
- *Statistical guarantees for their approach*: The authors prove tight error bounds for their approach which they validate with numerical studies.
- *Numerical stability for $\Delta t \to 0$*: One of the major claims the proposed approach is that it remains stable as $\Delta t \to 0$ (something which is not true for TO RRR). They show that this is indeed the case in the numerical study in Figure 1.
Methods And Evaluation Criteria: The authors demonstrate their approach on two problems:
- Estimating the eigenvalues of the generator for a 1D triple-well problem.
- Estimating the eigenfunctions of the alanine dipeptide in water, with a dimension of 45. It wasn't clear to me what was expected for the ground truth of the first and second eigenfunctions.
While I think these problems do a decent job of sketching out some of the main ideas and motivations for learning the IG, some more impressive numerical studies would have definitely strengthened the paper.
Theoretical Claims: - Theorem 6.2 & 6.3: The sketch of the proof seems correct but I haven't gone through all the details carefully.
- The discussion on the comparison between bounds from [Kostic'23a] and [Kostic'24] was really helpful for understanding the motivation of the present work.
Experimental Designs Or Analyses: - To reiterate, it wasn't clear to me what to read from Figure 2. The authors mention that the eigenfunctions have different constant values in the expected metastable states, but some estimate of the accuracy of the approach would have been useful.
Supplementary Material: I reviewed the attached appendix. I hope the authors intend to publish their code since I think this will help in understanding their approach.
Relation To Broader Scientific Literature: This work relates to the broad problem of learning models of Markov processes from data.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- Your introduction and review of the relevant literature was really well-written and easy to follow. I appreciate the effort and care that went into this section and I think it will make your work more broadly accessible to a wider ML audience.
Weaknesses:
- While I think your approach is a strong contribution some sections of your write-up were challenging to parse. While some of this comes with the territory and the nature of the problem you are tackling, I think adding an appendix on notation / simplifying some of your notation would make your work more accessible. For example, I had to dig quite a bit to understand the meaning of $[\cdot]_r$ and $\otimes$.
Other Comments Or Suggestions: - under Section 3: "result holds true the minimizers of for the..."
Questions For Authors: - I'm wondering how much of a challenge it might be to apply your approach for estimating latent dynamics (i.e. where not all relevant states are directly observed)?
- As a follow up to this question, might it be possible to apply your approach to estimate the generator for Markov-approximate fractional Brownian motion? [1]
- Can you discuss how you might apply your approach for forecasting?
[1] Daems, Rembert, et al. "Variational inference for SDEs driven by fractional noise." arXiv preprint arXiv:2310.12975 (2023).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s insightful evaluation and valuable comments. Below, due to space limitations, we briefly address the highlighted weaknesses and respond to the reviewer’s questions, committing to incorporating all feedback in our revision. If needed, we can elaborate further on each point.
## Additional empirical evidence:
Following the reviewer's suggestion, we have expanded our empirical evaluation to include IG baselines and support the claims summarized in Table 1. For a brief discussion on the [results](https://green-chantal-92.tiiny.site) of our additional experiments, please see our reply to reviewer __nvbC__ .
## Presentation
To help the reader, we will include a table of major notations at the beginning of the appendix, and expand our appendix with more context on the kernel based operator learning and derivation of the algorithms. Further, if accepted, we will use the extra page to include smoother transitions between challenging concepts. Finally, concerning the discussion of the Alanine-Dipeptide experiment, since there is no ground truth for this system, the only thing we could show is that the result aligns with the state-of-the-art expert knowledge in molecular dynamics, (Wehmeyer & Noe, 2018). In the revised manuscript we will discuss more on the interpretation of this figure.
## Questions
We thank the reviewer for their questions, which suggest potentially interesting extensions of our framework.
- Partially observed systems: The major problem in partially observed systems, in the context of TO/IG kernel-based learning, lies in the fact that one might lose the universality of representation, and hence encounter difficulty in properly predicting distributions. One possible way to overcome this is to rely on Takens-type theorems (deterministic and/or stochastic) (Sauer et al., ”Embedology”, 1991; Koltai and Kunde, “A Koopman–Takens theorem”, 2024), which essentially guarantee, under certain assumptions, that by augmenting the current measurements with the past ones we decrease the loss of information and improve estimation.
- Fractional Brownian motion: Consider the SDE $dX_t = b(X_t) dt + \sigma(X_t) dB_{t}^H$ where $B^H =(B\_{t}^H )\_{t}$ is a fractional Brownian motion. When $H \neq 1/2$, the fractional Brownian motion (fBm) is no longer a semimartingale, which means Itô calculus does not apply directly. Although the equation can still be given a meaning—via the Stieltjes integral for $H > 1/2$ or the Skorokhod integral for $H \in (1/4,1/2)$—the resulting process is no longer Markovian. Our generator-based approach no longer applies in this setting. However, as you suggest, we could adapt our method to estimate the generator for a Markovian approximation associated with the autonomous version of the fractional SDE introduced in [1]: $dX_t = b_\theta (X_t) dt + \sigma_\theta (X_t) dB_{t}^H.$ This process can be approximated using a finite linear combination of Ornstein-Uhlenbeck processes, $\hat{B}^H(t)$, leading to the so-called Markov-Approximate fractional SDE (MA-fBMSDE): $dX_t = b_{\theta} (X_t) dt + \sigma_{\theta} (X_t) d\hat{B}\_{t}^H,$ where $d\hat{B}\_{t}^H = \sum_{k=1}^{K} \omega_k dY_t^k, \quad \text{with} \quad dY_t^k = -\gamma_k Y_t^k dt + dW_t.$ By applying Proposition 2, the process $X_t$ can be augmented with a finite number of Markov processes $Y_t^k$ (which approximate $B^H$), forming a higher-dimensional state variable: $Z_t = (X_t, Y_{t}^1, \dots, Y_{t}^K) \in \mathbb{R}^{D(K+1)}.$ The process $Z = (Z\_{t})\_{t}$ is Markovian and can be described by an ordinary SDE: $dZ_t = h_{\theta} (Z_t) dt + \Sigma_{\theta} (Z_t) dW_t.$ The infinitesimal generator is given by: $$\mathcal{L} f(z) =\left( b (x) - \sigma(x) \sum_{k=1}^{K} \omega_k \gamma_k y^k \right) \frac{\partial f}{\partial x} + \sum_{k=1}^{K} (-\gamma_k y^k) \frac{\partial f}{\partial y^k}+ \frac{1}{2} \sum_{i,j} (\Sigma_{\theta} \Sigma_{\theta}^\top)_{i,j} \frac{\partial^2 f}{\partial z_i \partial z_j}.$$ This satisfies our assumptions, allowing us to apply our procedure.
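For illustration, the MA-fBMSDE construction above can be sketched numerically with a simple Euler-Maruyama loop; the weights $\omega_k$ and rates $\gamma_k$ below are arbitrary placeholders rather than the values derived in [1]:

```python
import numpy as np

# Euler-Maruyama sketch of the Markov-approximate fBm: K Ornstein-Uhlenbeck
# processes Y^k driven by a shared Brownian increment dW, combined as
# dB_hat = sum_k omega_k dY^k. Weights/rates are placeholders (illustrative only).
rng = np.random.default_rng(0)
K, n_steps, dt = 4, 10_000, 1e-3
gamma = np.array([0.1, 1.0, 10.0, 100.0])  # placeholder OU rates gamma_k
w = np.array([0.4, 0.3, 0.2, 0.1])         # placeholder weights omega_k

Y = np.zeros(K)                            # augmented Markov state (Y^1, ..., Y^K)
B_hat = np.zeros(n_steps + 1)              # approximate fractional BM path
for i in range(n_steps):
    dW = np.sqrt(dt) * rng.normal()        # shared noise increment
    dY = -gamma * Y * dt + dW              # dY^k = -gamma_k Y^k dt + dW
    Y = Y + dY
    B_hat[i + 1] = B_hat[i] + w @ dY       # dB_hat = sum_k omega_k dY^k

assert np.all(np.isfinite(B_hat)) and B_hat[0] == 0.0
```

The augmented state $(X_t, Y_t^1, \dots, Y_t^K)$ would then be evolved jointly in the same loop, recovering the Markovian process $Z_t$ described above.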
## Forecasting
Given IG’s spectral decomposition, Eq (8) provides the solution directly, since it enables forecasting of full state distributions beyond just the mean (e.g., $f$ as an indicator function). Hence, we can use our method to forecast $\mathbb{E}[h(X_t)\vert X_0 = x] \approx \sum_{i\in[r]} e^{\hat \lambda_i} \langle \hat{g}\_{i}, h \rangle\_{\mathcal H} \hat{f}\_{i}(x)$, for $h\in\mathcal{H}$, noting that $\langle \hat{g}\_{i}, h \rangle\_{\mathcal H}$ can be computed on the training set via the kernel trick, see (Kostic et al., 2022). Further, note that this formula extends to all $\mathcal{L}^2_\pi$ functions, at the price of an additional projection error, and that we can, hence, predict the evolution of distributions, see e.g. (Klus et al., 2019) and (Kostic et al., "Consistent long-term forecasting of ergodic dynamical systems", 2024). If useful, we can include an empirical example in the main body or appendix. | Summary: The paper deals with learning continuous-time Markovian dynamics. While existing methods focus on learning transfer operators, here the authors suggest learning a spectral decomposition of the semigroup's generator, under some assumptions. This is done by finding a (finite-rank) approximation of the resolvent in an internal RKHS, thus exploiting its appealing properties (e.g. boundedness).
This approach is applicable to a relatively broad class of processes, and gives rise to accurate and efficient data-driven algorithms.
## update after rebuttal
The paper and the author's answers are convincing, and my recommendation to accept the paper remains.
Claims And Evidence: All claims made in the submission seem to be supported by a sound theoretical analysis and exact proofs (albeit naturally it was impossible to really get into details within the short review period).
The experimental section, however, does not show a clear case of superiority over existing methods. A comparison is made only on one example and only to one baseline, where even there the results are quite arguable.
Methods And Evaluation Criteria: Basically yes, but more numerical experiments are needed. The inset frame in Fig.1 is unclear to me.
Theoretical Claims: I briefly went over proof, though the details are too deep to cover during the short review period.
Experimental Designs Or Analyses: See my answer above.
Supplementary Material: I briefly went over titles and proofs.
Relation To Broader Scientific Literature: Compared to existing literature, this work extends the analysis to a broad class of dynamics and finds solutions to previously known drawbacks (e.g. short sampling intervals). The key idea of studying the resolvent seems to be new and mind-opening.
Essential References Not Discussed: The review of related work seems adequate.
Other Strengths And Weaknesses: The submission seems original and novel, with strong theoretical contributions. The empirical contribution, however, is not established in my opinion.
Other Comments Or Suggestions: Although overall clarity of the paper is not bad, it is still very easy to get lost with all symbols and notations. It would be recommended to encapsulate notations and definitions into a single table (in the Appendix, maybe).
It is crucial to add extra experiments and comparisons with more baselines.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s insightful evaluation and valuable comments. Below, due to space limitations, we briefly address the highlighted weaknesses and respond to the reviewer’s questions, committing to incorporating all feedback in our revision. If needed, we can elaborate further on each point.
## Additional empirical evidence (results can be found [here](https://green-chantal-92.tiiny.site) ):
While we still feel that our work is mainly theoretical, following the reviewer's suggestion, we have expanded our empirical evaluation to include IG baselines and support the claims summarized in Table 1. In particular:
### 1D Langevin dynamics experiment:
We extend the original experiment by comparing it to __two IG baselines__ that use prior knowledge of IG:
- **Galerkin projection estimate (Hou et al., 2023)**, and
- **Energy based Dirichlet form regression (Kostic et al., 2024)**.
In Figure 4 (see the above link) we plot true eigenvalues as black vertical lines, while the estimated eigenvalues are plotted as magenta (our dual method) and red (Dirichlet form regression) dashed lines. Further, since the Galerkin projection estimate has numerous spurious eigenvalues, we plot their empirical distribution in the form of a histogram, stressing that expert knowledge is needed for this estimator to extract good estimates from the spurious ones. Finally, we report that **our results are comparable to the SOTA** (Kostic et al., 2024) estimator, **despite not using the explicit IG knowledge**, **while being one order of magnitude faster to train**.
### Non-normal sectorial IG:
We further conduct an experiment using the 2D Ornstein-Uhlenbeck process with non-symmetric drift, estimating the leading nontrivial complex-conjugate eigenvalue pair. We compare our primal method based on random Fourier features with the corresponding TO, noting that **Kostic et al. (2024) is inapplicable** in this setting and that **Galerkin projection estimators (Hou et al., 2023) are inefficient**, since, as observed above, they typically result in over 50 eigenvalues in the zone of interest. In Figure 5 (see the link above) we show comparison for 10 random trials using 1000 random features, two different time discretizations $\Delta t=0.01$ and $\Delta t = 0.001$, and corresponding sample sizes $n=10^4$ and $n=10^5$. Note that, to obtain small errors, sample sizes are much higher than in the self-adjoint case (Langevin). This is due to non-normality of the generator, as predicted by our theoretical analysis, see Appendix C.5, where one can see that our method consistently has estimates in the tighter $\varepsilon$-pseudospectrum of the operator.
## Presentation
As suggested, we will include a table of major notations at the beginning of the appendix. | Summary: This paper studies a new class of non-parametric learning algorithms for continuous-time Markov processes, specifically for learning the eigenfunctions and eigenvalues of their infinitesimal generator (IG) of the semigroup of transfer operators (TO). While existing methods tend to focus on learning the TO which share the same eigenfunctions to IG, the authors criticize their deteriorating spectral gap as the sampling frequency of the data increases (i.e. as the time-lag decreases). Then for more recent methods that directly learn the IG and its eigenstructure, the authors point out that the challenges coming from the unboundedness of the IG is not well addressed. Thus, the goal of the paper is to propose a learning algorithm for IG that can properly handle its unboundedness. To this end, the authors propose to leverage an auxiliary operator, called the resolvent operator, which has the same eigenfunctions as the IG, and can be obtained through the Laplace transform of the TO. For a class of Markov processes with sectorial IG (this includes all time-reversal-invariant Markov processes and some important non-time-reversal processes), the resolvent is uniformly bounded outside a sector containing the spectrum, which addresses the unboundedness problem. The authors then extend an established prior method (reduced rank regression; RRR) for operator learning in RKHS to learn the resolvent operator, and provide statistical learning guarantees. A particularly new contribution is bounding the integration bias, i.e., the error of Laplace transform under possibly irregular sampling intervals, using spectral perturbation theory. The authors demonstrate the proposed algorithm in two time-reversal-invariant processes. The first experiment demonstrates that while TO learning is sensitive to small time-lag, the proposed method learning the resolvent is robust. 
The second demonstrates that the proposed method successfully recovers the leading two eigenfunctions of IGs from a molecular dynamics data, in a setup where a previous IG learning method is intractable due to the high dimensionality.
## update after rebuttal
The authors provided detailed clarifications in response to my concerns. I think this is a solid work in the domain of nonparametric learning of dynamical systems and would like to retain my supportive rating.
Claims And Evidence: Most of the claims are supported by theoretical or empirical evidence. Some claims listed below could benefit from additional clarification.
- In pages 2, 3, and Table 1, it is claimed that current methods that directly learn the IG are susceptible to spurious eigenvalues due to the unboundedness of the IG. This claim is not explicitly supported by theoretical results or experiments as far as I can confirm.
- As a related note to the above, while a proper empirical comparison against TO learning method is made in the first experiment, comparison against IG learning methods are only done theoretically, not empirically.
- In page 3, 8, and Table 1, the authors claim that the proposed method is structure-agnostic and apply to the broad class of sectorial IGs which covers not only self-adjoint IGs but also important non-self-adjoint ones. However the two experiments both concern self-adjoint IGs, as far as I understand.
- In page 5, the authors claim that the definition of the sampling operator and its adjoint implies that the empirical estimation of the covariance can be expressed using them. It would be better if the derivation is made more explicit.
- In page 4, the authors claim that while eigenvalues are informative about long-term behavior, they are static properties and fail to capture transient dynamics or the full time evolution of the process. Can the authors elaborate a bit on this? One might argue that eigenvalues decaying rapidly over time can provide information about the transient dynamics.
- In page 7, the authors claim that the eigenfunction learning bound for the considered TO learning method becomes vacuous as the time-lag goes to 0. Can the authors elaborate a bit more on why? I was not able to fully understand the description.
Methods And Evaluation Criteria: The proposed method is suitable for the problem setup considered in the paper. For the benchmark dataset, the scope of the experiment could be improved to include non-time-reversal-invariant Markov processes, which could add a strong supporting evidence to the paper; please see the Claims and Evidence section.
Theoretical Claims: I carefully checked the soundness of the problem setup, including Markov semigroups, transfer operator and their infinitesimal generator, the resolvent operator, and their spectral decomposition including the compatibility of eigenfunctions and they are sound. I did go over the derivation of the bound of the integration bias but could not carefully verify the correctness due to my limited expertise in spectral perturbation theory. I did not carefully check the proofs for the other bounds.
Experimental Designs Or Analyses: Please see the Methods and Evaluation Criteria section.
Supplementary Material: Please see the Theoretical Claims section.
Relation To Broader Scientific Literature: Resolvent operators of Markov semigroups and their expression as Laplace transform has been studied theoretically (Engel & Nagel, 1999) but has not been leveraged in machine learning context as far as I know. The closest ideas I am aware of are identifying Koopman eigenfunctions directly using Laplace transform or similar integral operators (Mohr & Mezic, 2014; Bevanda et al., 2023), which I think are related but not exactly the same to the approach in this work.
Engel & Nagel, One-Parameter Semigroups for Linear Evolution Equations (1999)
Mohr & Mezic, Construction of Eigenfunctions for Scalar-Type Operators via Laplace Averages with Connections to the Koopman Operator (2014)
Bevanda et al. Koopman Kernel Regression (2023)
Essential References Not Discussed: Important related works are properly discussed in the paper.
Other Strengths And Weaknesses: Strengths
- Originality: The paper is original in its use of the resolvent operator and its expression as Laplace transform of the Markov semigroup in the non-parametric operator learning context.
- Significance: The paper is significant in its generality (sectorial IGs, and possibly handling irregular sampling intervals) and addressing of the issues of the current TO and IG learning methods (sensitivity to small time-lag and spuriousness due to the unboundedness of IGs, respectively).
- Clarity: The paper thoroughly introduces the operator theory backgrounds necessary to understand the theoretical results.
Weaknesses:
- Clarity: The readability of the paper could be improved, especially in its description of the kernel learning algorithm in Sections 4 and 5. Currently it requires that the reader is already familiar with Kostic et al. (2022), Kostic et al. (2024), and Turri et al. (2023).
- For other weaknesses, please see the previous sections.
Kostic et al. Learning Dynamical Systems via Koopman Operator Regression in Reproducing Kernel Hilbert Spaces (2022)
Kostic et al. Learning the Infinitesimal Generator of Stochastic Diffusion Processes (2024)
Turri et al. A Randomized Algorithm to Solve Reduced Rank Operator Regression (2023)
Other Comments Or Suggestions: Minor typos or ambiguities:
- Page 1: trajecotry -> trajectory
- Page 1: In the introduction it is stated that current kernel methods for TO learning require kernel function selection, but this is a shared limitation with the proposed method.
- Page 2: "the space of functions on [?] that are square-integrable"
- Page 3: Below equation (4), which space is $L^2$? The same goes for the first line of page 13.
- Page 3: Below equation (5), should "the spectrum of $G$" be "the spectrum of $L$"?
- Page 3: In equation (8), the symbol $f$ refers to both the observable and right eigenfunctions of $L$
- Some parts in the appendix denote the resolvent operator by $L_\mu$, while the main text uses $R_\mu$
- Page 3: In equation (9), the reason why the adjoint $G^*$ is used is too implicit. As far as I understand, it is because technically we are regressing the embedded Perron-Frobenius operator (under the terminology of Klus et al. 2019), not the transfer operator (Koopman operator).
- Page 3: "an universal approximation result holds true the minimizers of for the minimizer (9)"
- Page 5: Below equation (18), which joint distribution does $\rho_{j\Delta t}$ represent?
- Page 5: "Ridge Regression solution of the regularized risk , that is without"
- Page 7: What does $\mathrm{gap}_j(R_\mu)$ precisely represent? I might have missed it but it is not defined in the main text.
- Page 13: "the RKHS associated to kernel $k\mathcal{X}\times\mathcal{X}\to\mathbb{R}$"
- Page 15: "(?)Theorem 2]Kostic2022"
Klus et al. Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces (2019)
Questions For Authors: - In page 5, the authors assume stationarity $X_0\sim\pi$ to simplify the analysis. How restrictive is this, and what can be done if we were to remove this assumption?
- In page 1, the authors mention that understanding long-term behavior is essential for accurate forecasting and interpretation, and in page 4 that learning the IG alone is insufficient for forecasting the process and estimating the spectral decomposition is of greater interest. Indeed, the experiments focus on learning the eigenstructure. Can the authors comment on what is additionally needed to use the proposed method for forecasting?
- In Appendix C.2.4., I am not sure if I understood the first paragraph correctly. Am I right in understanding that the control of the integration term is a novel technical contribution of this paper, which was not established or considered in the previous TO and IG learning methods?
- Out of curiosity, can the authors comment on how the proposed approach is related to the use of Laplace transforms or similar integral operators in Mohr & Mezic, (2014) and Bevanda et al., (2023)?
Mohr & Mezic, Construction of Eigenfunctions for Scalar-Type Operators via Laplace Averages with Connections to the Koopman Operator (2014)
Bevanda et al. Koopman Kernel Regression (2023)
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s insightful evaluation and valuable comments. Below, due to space limitations, we briefly address the highlighted weaknesses and respond to the reviewer’s questions, committing to incorporating all feedback in our revision. If needed, we can elaborate further on each point.
## Additional empirical evidence:
Following the reviewer's suggestion, we have expanded our empirical evaluation to include IG baselines and support the claims summarized in Table 1. For a brief discussion on the [results](https://green-chantal-92.tiiny.site) of our additional experiments, please see our reply to reviewer __nvbC__ .
## Algorithms
We will expand Appendix B with detailed derivations, including a discussion of the sampling operator $\hat{S}$. Briefly, by the definition of $\hat{S}$ for $h \in \mathcal{H}$, we derive $\hat{S}^* \hat{S} h = n^{-1/2} \hat{S}^* [h(x_1) \dots h(x_n)]^{\top} = n^{-1} \sum_{i \in [n]} \phi(x_i) h(x_i) = [n^{-1} \sum_{i \in [n]} \phi(x_i) \otimes \phi(x_i)] h$, concluding that the empirical covariance can be written as $\hat{S}^* \hat{S}$.
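In finite dimensions this identity is easy to check numerically; a minimal sketch with a hypothetical three-dimensional feature map (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                          # number of samples

# Hypothetical finite-dimensional feature map phi(x) = (1, x, x^2)
x = rng.normal(size=n)
Phi = np.stack([np.ones(n), x, x**2], axis=1)    # row i is phi(x_i)

# Sampling operator S: h |-> n^{-1/2} (h(x_1), ..., h(x_n)); here the matrix n^{-1/2} Phi
S = Phi / np.sqrt(n)

# Empirical covariance as an average of outer products phi(x_i) (x) phi(x_i) ...
C_outer = sum(np.outer(Phi[i], Phi[i]) for i in range(n)) / n
# ... coincides with S* S
C_sampling = S.T @ S

assert np.allclose(C_outer, C_sampling)
```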
## Transient dynamics
As discussed in Appendix A.3, when $L^* L \neq LL^*$, eigenvalues alone may not fully explain the evolution of linear dynamics. The norm of the resolvent relates to transient growth in stable systems via the pseudo-spectrum (Trefethen & Embree, 2020). We will expand this discussion in the revision.
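As a toy illustration of this point (our own example, not from the paper), a non-normal stable matrix can exhibit transient growth that its eigenvalues alone do not reveal:

```python
import numpy as np
from scipy.linalg import expm

# A stable but non-normal matrix: eigenvalues (-1, -2) predict pure decay,
# yet the norm of the semigroup e^{tL} grows transiently before decaying.
L = np.array([[-1.0, 20.0],
              [ 0.0, -2.0]])
assert not np.allclose(L @ L.T, L.T @ L)     # L L* != L* L, i.e. non-normal

norms = [np.linalg.norm(expm(t * L), 2) for t in (0.0, 0.3, 5.0)]
# transient growth: ||e^{0.3 L}|| > 1, followed by eventual decay
assert norms[1] > 1.0 and norms[2] < norms[1]
```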
## TO as $\Delta t \to 0$
The core idea is that $A_{\Delta t} = e^{\Delta t L }\to I$, which implies that the spectral gap vanishes, thereby rendering the eigenfunction bounds vacuous.
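A minimal numerical sketch of this effect, using a hypothetical diagonal generator with eigenvalues $0, -1, -5$ (illustrative values only):

```python
import numpy as np

# Hypothetical diagonal generator with spectrum {0, -1, -5} (illustrative only).
L = np.diag([0.0, -1.0, -5.0])

gaps = []
for dt in (1.0, 0.1, 0.01):
    eig = np.sort(np.exp(dt * np.diag(L)))[::-1]  # eigenvalues of A_dt = e^{dt L}
    gaps.append(eig[0] - eig[1])                  # gap between the leading eigenvalues

# As dt -> 0, all eigenvalues of A_dt cluster at 1 and the spectral gap vanishes.
assert gaps[0] > gaps[1] > gaps[2] > 0.0
```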
## Minor issues
We will correct all typos. Additionally, we remark:
The presence of $G^*$ in the risk can be understood from two perspectives. One follows from TO’s risk formulation in the space of observables, which, due to the kernel trick, translates into vector-valued regression (Kostic et al., 2022). The other, as the reviewer notes, considers the Perron-Frobenius operator, (Klus et al., 2019), on probability distributions, linking the estimator to the MMD metric induced by a characteristic kernel. We will expand Section A.2 to introduce regression-based operator learning for both TO and IG methods.
$\rho_t$ denotes the distribution of $(X_s, X_{s+t})$ and $\mathrm{gap}_i$ is introduced in the last line of Theorem 6.3. A table of major notations will be added at the beginning of the appendix.
## Questions
[Q1] When $X_0$’s distribution is not invariant, variance analysis requires adapting the method of blocks for mixing with Bernstein inequalities in Hilbert spaces for independent (but non-identically distributed) variables. This complicates the effective dimension, as the covariance operator w.r.t. the invariant distribution is replaced by one w.r.t. the ergodic mean. While technically feasible, this adds complexity, so we opted for a simpler approach to highlight key contributions.
[Q2] Knowing IG implies knowledge of the SDE, requiring a numerical solution for process realizations. In contrast, given IG’s spectral decomposition, Equation (8) provides directly the solution, since it enables forecasting of full state distributions beyond just the mean (e.g., $f$ as an indicator function). See reply to __CrN8__.
[Q3] To our knowledge, our technique for controlling Bochner integral approximation errors for linear operators is novel, possibly extending beyond TO/IG methods. We will emphasize this in the contributions paragraph.
[Q4] As the reviewer rightly points out, the Laplace transform is a well-known analytical tool in the study of continuous operator semigroups. In the context of deterministic dynamical systems, it was used by Mohr & Mezic, (2014) to investigate the spectral decomposition of the Koopman operator (TO for deterministic dynamical systems). Their work underscores its potential but leaves efficient numerical methods as an open problem. In contrast, we consider SDEs and not only design numerical methods based on the Laplace transform, but also develop statistical learning theory, providing sharp bounds on spectral estimation. Additionally, we remark that the results of (Bevanda et al. 2023) are also limited to deterministic dynamical systems. There, the Laplace transform served more as inspiration than an explicit component of the method, as the authors built an RKHS via finite-horizon integration of kernel features over trajectories to formulate Koopman operator regression and study learning bounds.
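To make [Q2] concrete, the spectral forecasting formula can be checked in finite dimensions, where the generator is a rate matrix and the semigroup is a matrix exponential; the matrix below is an arbitrary illustrative example, not an estimator produced by our method:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state rate matrix Q (generator of a continuous-time Markov
# chain); a finite-dimensional stand-in for the learned IG.
Q = np.array([[-1.0, 0.7, 0.3],
              [ 0.4, -0.9, 0.5],
              [ 0.2, 0.8, -1.0]])
h = np.array([1.0, -2.0, 0.5])              # observable h
t = 0.7

# Spectral forecast: E[h(X_t) | X_0 = i] = sum_j e^{lambda_j t} <g_j, h> f_j(i)
lam, F = np.linalg.eig(Q)                   # columns of F: right eigenfunctions f_j
G = np.linalg.inv(F)                        # rows of G: dual (left) eigenfunctions g_j
forecast_spec = (F * np.exp(lam * t)) @ (G @ h)

# Reference: semigroup applied directly, e^{tQ} h
forecast_ref = expm(t * Q) @ h

assert np.allclose(forecast_spec.real, forecast_ref)
```

In the kernel setting the inner products $\langle \hat{g}_i, h\rangle$ are computed on the training set via the kernel trick rather than by matrix inversion, but the forecasting recipe is the same.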
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the detailed clarifications. I think this is a solid work, the main reason that keeps me from raising my score to 5 is related to the concern of reviewer iVn3--a good amount of additional effort would be necessary for the ideas in this paper to (practically) impact the general audience of ICML.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their feedback and appreciation of our work. We would just like to note that **we have additionally addressed concerns on the impact**, which, due to limited space in the rebuttal, we couldn’t emphasize enough in our reply to __iVn3__ while still answering all their questions on the mathematical framework we have studied. We believe that this discussion, which will be included in the revised manuscript, shows that our work can significantly impact the ML community and inspire more practically oriented follow-up interdisciplinary research.
A Variational Framework for Improving Naturalness in Generative Spoken Language Models | Accept (poster) | Summary: This paper proposes a variational approach to speech-language modelling in contrast to traditional auto-regressive models. The aim is to capture information other than semantics.
## update after rebuttal
I checked the results in Appendix H and I think the results are interesting. Why not add those results into the main paper (since we as reviewers are not required to see Appendices)? I increase my score to 4 to acknowledge this effort.
Claims And Evidence: Yes. I can find evidence in section 5 supporting the authors claim that the variational method improves the naturalness of synthesised speech.
Methods And Evaluation Criteria: Evaluation is comprehensive.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: The experimental design is sound and valid. My main concern is how this method is useful for more practical downstream tasks such as ASR, emotion or speaker recognition, to reflect that capturing the additional (mainly paralinguistic) information is useful. I strongly encourage the authors to conduct at least 2 of the above practical tasks using the variational speech LM and compare it to token-based speech LM to see any potential benefits of using variational methods.
Supplementary Material: Yes. I listened to the audio samples generated. They generally align with what the authors claim.
Relation To Broader Scientific Literature: This is to my best knowledge the first work to incorporate variational framework in speech language models.
Essential References Not Discussed: G. Sun et al. "Generating diverse and natural text-to-speech samples using a quantized fine-grained vae and autoregressive prosody prior", In Proc. ICASSP. 2020.
This should be discussed in section 3.1. This paper is the first to apply a trainable auto-regressive prior in speech synthesis.
Other Strengths And Weaknesses: The experiments were conducted on LibriSpeech and Libri-light, datasets with relatively little variability beyond semantic information. I believe the variability remains in the speaker representation space, which is not explicitly reflected in the experimental design.
Other Comments Or Suggestions: Actually, Fig. 1 can be improved by unifying the font.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for recognizing both the robustness of our experimental design and the novelty of our contribution. Below, we address each concern raised:
## Emotion and Speaker Recognition
> My main concern is how this method is useful for more practical downstream tasks such as ASR, emotion or speaker recognition, to reflect that capturing the additional (mainly paralinguistic) information is useful. I strongly encourage the authors to conduct at least 2 of the above practical tasks using the variational speech LM and compare it to token-based speech LM to see any potential benefits of using variational methods.
We appreciate this valuable feedback. We would like to direct the reviewer to Appendix H, where we present our comprehensive evaluation results on speech emotion recognition and speaker recognition tasks. In Tables 8 and 9, our results suggest that the variational features encode speaking styles and prosodic patterns that are useful for both tasks. We kindly direct the reviewer to Appendix H for a more detailed experimental setup, results and analysis. Additionally, we would like to note that our main objective remains to improve the naturalness of speech synthesis, and the side experiments provide additional evidence that our method is capable of achieving that.
## Additional References
> G. Sun et al. "Generating diverse and natural text-to-speech samples using a quantized fine-grained vae and autoregressive prosody prior", In Proc. ICASSP. 2020.
This should be discussed in section 3.1. This paper is the first to apply a trainable auto-regressive prior in speech synthesis.
Thank you for this important suggestion. We will incorporate this reference in Section 3.1 and properly acknowledge its contribution.
## Figure Presentation
> Actually, Fig. 1 can be improved by unifying the font.
We appreciate this attention and will unify the font in Fig. 1.
## Dataset Selection
> The experiment was conducted using LibriSpeech and Libri-light, which are datasets with quite small variabilities other than semantic information. I believe the variability remains in the speaker representation space, which is not explicitly reflected in the experimental design.
We thank the reviewer for their valuable feedback. We acknowledge that LibriSpeech and Libri-light are not considered highly expressive. We evaluate on these datasets since they are standard benchmarks in the literature [1, 2, 3]. Importantly, our proposed approach improves synthesis naturalness even on these less expressive datasets. This suggests that our method would likely yield even greater improvements on more expressive datasets, where natural expressive speech synthesis presents additional challenges.
We hope these responses adequately address the reviewer's concerns and help in the evaluation of our work.
## References
[1] E. Kharitonov, et al., Text-free prosody-aware generative spoken language modeling, in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8666–8681, Dublin, Ireland, May 2022.
[2] Z. Borsos, et al., AudioLM: A language modeling approach to audio generation, IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:2523–2533, 2023.
[3] K. Lakhotia, et al., On Generative Spoken Language Modeling from Raw Audio, Transactions of the Association for Computational Linguistics, 2021. | Summary: This paper automatically learns continuous speech attributes (such as pitch, energy, spectrum) through VAE, and jointly model them with semantic tokens to improve the naturalness and language fidelity of generated speech. Experiments show that this method significantly outperforms the baseline model in subjective naturalness scores and does not require manual design of rhythmic features.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
I checked the ELBO decomposition and the KL divergence derivation against the standard theory of variational autoencoders, and they are correct.
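For reference, the standard identity underlying this check, written in generic VAE notation (my notation, not necessarily the paper's), is:

```latex
\log p_\theta(\mathbf{x})
= \underbrace{\mathbb{E}_{q_\phi(\mathbf{z}\mid\mathbf{x})}\!\left[\log p_\theta(\mathbf{x}\mid\mathbf{z})\right]
- \mathrm{KL}\!\left(q_\phi(\mathbf{z}\mid\mathbf{x}) \,\|\, p(\mathbf{z})\right)}_{\text{ELBO}}
+ \mathrm{KL}\!\left(q_\phi(\mathbf{z}\mid\mathbf{x}) \,\|\, p_\theta(\mathbf{z}\mid\mathbf{x})\right)
```

Since the last KL term is non-negative, the ELBO lower-bounds the marginal log-likelihood, which is what the paper's derivation should reduce to.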
Experimental Designs Or Analyses: Yes.
The experimental design verifies the effectiveness of the proposed variational method by comparing against three baselines (Token-LM, Token-LM+Pitch, and Token-LM+Acoustic), and analyzes how the hyperparameters β and γ regulate information encoding and generation quality through ablation studies.
Supplementary Material: The supplementary material provides audio comparisons across the different systems, consistent with the better performance reported in the paper on the speech reconstruction and speech continuation evaluations.
Relation To Broader Scientific Literature: This paper continues the idea of [1] to improve the naturalness of generation by combining modeling language and prosodic information, but abandons the hand-designed fundamental frequency features and moves towards end-to-end learning.
In contrast to the discrete acoustic token method of [2], it verifies the advantages of continuous latent variables for prosodic modeling and avoids the information loss caused by multi-stage discretization.
[1] Kharitonov, E., Lee, A., Polyak, A., Adi, Y., Copet, J., Lakhotia, K., Nguyen, T. A., Riviere, M., Mohamed, A., Dupoux, E., and Hsu, W.-N. Text-free prosody-aware generative spoken language modeling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8666– 8681, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long. 593.
[2] Borsos, Z., Marinier, R., Vincent, D., Kharitonov, E., Pietquin, O., Sharifi, M., Roblek, D., Teboul, O., Grangier, D., Tagliasacchi, M., and Zeghidour, N. AudioLM: A language modeling approach to audio generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:2523–2533, 2023.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: This paper strikes a good balance between innovation and practicability: the continuous prosodic features are learned automatically by the VAE, which avoids the cumbersome manual design of prosodic features in traditional methods and improves the naturalness of the generated speech by jointly modeling semantic and prosodic information.
The experimental design is also rigorous: the validity of the method is verified via multi-dimensional metrics (reconstruction quality, language modeling ability, subjective scoring), and the subjective evaluation follows standardized procedures, which enhances the credibility of the results.
But this "end-to-end" design idea does not seem to be new in the field of speech synthesis.
Moreover, the performance of the model is highly dependent on the tuning of β and γ, which may limit the robustness of the method under different data distributions. In addition, the experiments only cover an English dataset; prosodic patterns differ substantially across languages, and it is unclear whether the model would need to be retrained or its hyperparameters adjusted.
But overall, the paper proposes a valuable solution to the problem of naturalness of speech generation, so I give a weak acceptance.
Other Comments Or Suggestions: No.
Questions For Authors: See Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough review and acknowledging the strengths of our paper, including the balance between innovation and practicability, the automatic learning of continuous prosodic features, and our rigorous experimental design.
Below, we address the feedback from the reviewer:
## End-to-End Design Novelty
> This "end-to-end" design idea does not seem to be new in the field of speech synthesis.
We thank the reviewer for this observation. End-to-end is indeed the current trend for building speech synthesis models. Our specific contribution lies in making the prosodic features end-to-end learnable with the speech language model through the integration with VAE. This approach enables automatic learning of continuous prosodic features without manual feature engineering, which we believe represents a meaningful advancement in the field.
## Hyperparameter Sensitivity
> The performance of the model is highly dependent on the tuning of β and γ, which may limit the robustness of the method under different data distributions.
This is an insightful observation. We agree that hyperparameter sensitivity is an important consideration for most work in this field. In Section 8: Limitations and Future Work, we have acknowledged this challenge and identified exploring automated methods for hyperparameter tuning as an important direction for future research. The current work serves as a first and crucial step in demonstrating that the proposed method effectively improves the prosodic naturalness of speech language models.
## Cross-Lingual Adaptability
> The experiment only verifies the English dataset, while the prosodic patterns of different languages are quite different, and it is unclear whether the model needs to be retrained or adjusted hyperparameters.
This is an excellent point that highlights an important direction for future research. We hypothesize that our approach of learning continuous prosodic features automatically rather than using hand-designed features offers inherent advantages for cross-lingual adaptation. Unlike rule-based systems that require language-specific expertise, our VAE framework can potentially discover and model language-specific prosodic patterns without human supervision.
We will add this discussion to the limitations section and highlight it as a direction for future work. Our approach, which learns continuous representations rather than relying on predefined features, may be well-suited to adapt to the diverse prosodic patterns across languages, though this remains to be verified through additional research.
We thank the reviewer for the valuable suggestions, which will help direct our ongoing research efforts.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply. I acknowledge the contributions of this study. However, the author's reply is not enough to dispel my concerns about the weaknesses mentioned above. Therefore, I tend to keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful consideration of our rebuttal and for your active response. We sincerely appreciate your acknowledgment of our contribution to the literature as well as the feedback that helps us refine our work. | Summary: This paper proposes a variational framework to enhance the naturalness of generative spoken language models by jointly learning continuous paralinguistic features and discrete semantic tokens. Traditional token-based models often neglect prosodic information, leading to unnatural speech. The authors address this by integrating a variational autoencoder (VAE) with an autoregressive prior to automatically encode continuous speech attributes (e.g., pitch, energy) alongside pre-extracted semantic tokens.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The method is well-written and easy to follow.
Some questions:
1. This paper proposes a VAE with an autoregressive prior to learn acoustic features that enhance naturalness in a semantic-token-based LLM. However, some works, such as SpeechTokenizer, which uses RVQ-based methods and distills semantic tokens into the first layer, can also address this issue: preserving paralinguistic features while modeling semantic features. The author should discuss them.
[1] SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models
2. Why use CER instead of WER? The speaker similarity metric is not reported.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: The comparison between the proposed and baseline methods is not sufficient. Generally, this work models the residual acoustic features to minimize the information loss of semantic token based LLM generation.
Generally, there are several existing (non-exhaustive) series of works: 1) some works such as SpeechTokenizer [1] directly distill semantic tokens into the first layer of the RVQ, then use an AR model to generate the semantic tokens and a NAR model to predict the rest. This differs from the "Token-LM acoustic" setting, since there the acoustic tokens are learned without semantic token supervision and the generative process is not well-designed. 2) some works only use AR/NAR methods to generate the semantic tokens and model the acoustic features with an additional diffusion model, which can also capture paralinguistic features to enhance naturalness. These works include but are not limited to MaskGCT [2], CosyVoice [3], and tortoise-tts [4].
[1] SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language Models
[2] MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer
[3] Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens
[4] tortoise-tts: Better speech synthesis through scaling.
My questions are:
* 1. The author should compare with more baselines, such as SpeechTokenizer, MaskGCT, etc.
* 2. The author should discuss more with existing works. why such VAE is necessary? What benefits can it bring compared with existing works?
* 3. The "Token-LM acoustic" baseline shows good reconstruction results but weak generation performance. I think this comes from: 1) the tokenizer used is not well-chosen (at the very least, the author should compare it with SpeechTokenizer); 2) the generative model is also not well designed. I suggest using an AR model to predict the first-layer semantic tokens and a NAR model to predict the residual acoustic features, conditioned on the semantic tokens.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: No.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The strengths and weakness are discussed in the previous sections.
Other Comments Or Suggestions: No.
Questions For Authors: The questions are discussed in the previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their constructive feedback. We address your concerns comprehensively below. We present the Tables of new experiment results [here](https://anonymous.4open.science/api/repo/icml-rebuttal-BDD8/file/ICML-rebuttal-1.pdf).
## Comparison with Additional Baselines
> Some work such as SpeechTokenizer which uses RVQ-based methods and distills semantic tokens in the first layer can also address the issues: present the paralinguistic features while modeling the semantic features. The author should discuss them.
> The author should compare it with SpeechTokenizer; 2) the generative model is also not well designed. I suggest using AR to predict the first semantic tokens and NAR to predict the residual acoustic features which use semantic tokens as conditions.
We appreciate your points about comparing our approach with additional baselines. We acknowledge that our comparison could be more comprehensive and have implemented a SpeechTokenizer-based approach as suggested.
**SpeechTokenizer Baseline**: We used the official SpeechTokenizer checkpoint for the encoder and decoder components (semantic and acoustic tokens ↔ speech). But we trained our own NAR decoder for predicting acoustic tokens from semantic tokens, as the original implementation requires text input (it was originally designed for TTS). We also re-trained the autoregressive model for semantic token prediction due to token set differences (1024 vs 200 tokens).
As shown in [Table 2](https://anonymous.4open.science/api/repo/icml-rebuttal-BDD8/file/ICML-rebuttal-1.pdf), our approach still achieves superior naturalness and meaningfulness scores compared to the newly established SpeechTokenizer baseline. Perceptual listening revealed that the SpeechTokenizer approach indeed improves upon Token-LM in prosody patterns. However, it introduces more audio quality artifacts than Token-LM, which uses a diffusion-based decoder to convert semantic tokens to speech. These artifacts offset the benefits of better prosody naturalness in the human evaluation scores. Note that we re-evaluate all methods in the human evaluation for fair comparison.
We will revise Sections 5 and 6 to incorporate these new results.
## Roles of VAE and Comparison with Existing Works
> Why such VAE is necessary? What benefits can it bring compared with existing works?
Our approach offers distinct advantages over existing methods:
1. **Compared to SpeechTokenizer and Token-LM**: In these baselines, the AR model accesses only semantic tokens. Our AR model additionally accesses variational features encoding prosody, better utilizing the stronger AR model for improved naturalness.
2. **Compared to Token-LM+Acoustic**: The acoustic (RVQ) tokens in this baseline are extracted without supervision from the AR model, while our approach jointly optimizes variational features for both reconstruction and AR prediction, so the encoder receives training signal from the AR model.
3. **Compared to Token-LM+Pitch**: Our approach doesn't require hand-engineered features, as it learns prosodic information in an unsupervised fashion.
The VAE framework enables joint learning of variational features optimized for both reconstruction and AR prediction, yielding superior naturalness and meaningfulness of the syntheses. We'll revise Section 6 to emphasize these advantages.
## Additional Metrics
> Why use CER instead of WER?
We originally chose CER following prior work [1], which evaluates pronunciation errors more accurately. As requested, we've added WER measurements in [Table 1](https://anonymous.4open.science/api/repo/icml-rebuttal-BDD8/file/ICML-rebuttal-1.pdf). The WER results follow the same trend as CER.
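(CER is typically computed as the character-level edit distance normalized by the reference length; the authors' exact scoring pipeline is not specified, so the following is only a minimal self-contained sketch of that standard definition:)

```python
def edit_distance(ref, hyp):
    # Levenshtein distance via a single rolling DP row.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            # RHS uses the old dp[j] (deletion), updated dp[j-1]
            # (insertion), and prev from the previous row (substitution).
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def cer(ref, hyp):
    # Character error rate: edits normalized by reference length.
    return edit_distance(ref, hyp) / len(ref)

print(cer("kitten", "sitting"))  # 3 edits / 6 chars = 0.5
```

WER is the same computation applied to word sequences instead of character sequences, which is why the two metrics usually track each other.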
> The speaker similarity metric is not reported.
In [Table 1](https://anonymous.4open.science/api/repo/icml-rebuttal-BDD8/file/ICML-rebuttal-1.pdf), we add speaker similarity metrics, calculated as the cosine similarity of speaker embeddings extracted from a pre-trained speaker verification model. Our method achieves a speaker similarity score slightly lower than the baseline approaches (0.42 vs. 0.45). However, as the reviewer can verify from our audio samples, the perceptual difference in speaker identity is minimal, while the naturalness and intelligibility improvements are noticeable. This represents a reasonable trade-off for most speech applications.
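(For concreteness, the speaker-similarity computation described here is the standard embedding cosine similarity; the embedding values below are hypothetical placeholders, not outputs of the actual verification model:)

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical speaker embeddings (in practice, outputs of a pre-trained
# speaker verification model for the reference and synthesized utterances).
emb_ref = np.array([0.2, 0.9, 0.1, 0.4])
emb_syn = np.array([0.1, 0.8, 0.2, 0.5])
print(round(cosine_similarity(emb_ref, emb_syn), 2))  # 0.98
```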
We believe these additions and clarifications address the reviewer's concerns and strengthen our paper.
## References
[1] K. Lakhotia, et al., On Generative Spoken Language Modeling from Raw Audio, Transactions of the Association for Computational Linguistics, 2021. | Summary: The authors propose a variational approach that directly encodes desired information from raw audio inputs, addressing the challenge of preserving prosody when modeling discrete speech codes primarily focused on phonetic content (such as HuBERT tokens). Their variational approach shows improved performance compared to baselines such as Token-LM (phoneme-based tokens only), Token-LM+acoustic (phoneme and acoustic tokens), and Token-LM+acoustic baseline.
Claims And Evidence: The comparisons in this work mainly focus on Token-LM models derived from self-supervised models like HuBERT, which do not use a reconstruction loss. However, if the model were compared to more recent approaches using more compressed acoustic tokens such as WavTokenizer, it would likely provide better insight into prosody than an LLM trained only on SSL-based semantic tokens. This weakens the motivation behind the proposed approach.
Methods And Evaluation Criteria: Not really. The use of a VAE to encode information absent in SSL-based discrete semantic tokens is reasonable. However, the model employs a diffusion decoder to reconstruct continuous representations, requiring 100 iterative steps for generation. This significantly increases computational complexity compared to predicting discrete acoustic codecs, making it unsuitable for real-time speech generation applications.
Theoretical Claims: The VAE formulation is correct, and I have reviewed the loss derivation provided in the appendix.
Experimental Designs Or Analyses: In terms of experimental design, the baselines used are re-implemented versions of existing models rather than direct comparisons with the original models. As a result, the validity of the reported evaluation results is somewhat diminished.
Supplementary Material: I have reviewed the model architecture and training details.
Relation To Broader Scientific Literature: While this approach could serve as an alternative to acoustic codecs, its reliance on a separate diffusion model for continuous feature generation increases computational cost significantly. Consequently, it is unlikely to be practical for real-time generation or streaming scenarios.
Essential References Not Discussed: .
Other Strengths And Weaknesses: Weaknesses
The authors compare their approach only with re-implemented models rather than conducting direct comparisons with existing baseline models. Since the baseline is based on Token-LM, which models very small discrete tokens with only 200 clusters, the results are not sufficiently comprehensive.
Methodologically, as mentioned above, it is difficult to determine what advantages this approach has over LLM-based models that predict acoustic tokens with delay patterns. Additionally, the reliance on a diffusion decoder raises concerns about whether this method can be used in streaming scenarios.
Other Comments Or Suggestions: .
Questions For Authors: What is the real-time factor (RTF) of the model's generation speed?
Can the mel-spectrogram be decoded only after all continuous features have been generated?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We appreciate both the recognition of our formulation's correctness and the critical feedback from the reviewer. We address the feedback below.
## Computational Complexity
>The model relies on a diffusion decoder, which increases computational complexity compared to predicting discrete acoustic codecs, making it unsuitable for real-time speech generation applications.
We would like to clarify several points:
- **Decoder flexibility**: Our variational approach isn't tied to a specific decoder. We chose diffusion for easier training, but our method works with various decoding strategies, including discrete token prediction. There is no constraint on the decoder $\theta$ used to parameterize $p_\theta(\mathbf{X}\mid\mathbf{Z})$. To validate this, we conducted additional experiments replacing the diffusion decoder with a token decoder which converts semantic tokens to SpeechTokenizer tokens [1], then used a pre-trained SpeechTokenizer decoder to generate speech. [Table 1](https://anonymous.4open.science/api/repo/icml-rebuttal-BDD8/file/ICML-rebuttal-2.pdf) shows that this variant (*Proposed + Token Decoder*) achieves similar or better performance compared to the diffusion approach.
- **Fair comparison**: We used the same diffusion decoder architecture across all compared methods. This ensures a fair comparison where only the modeling approach varies, isolating the impact of our variational method.
- **Research focus**: Our primary contribution is on improving prosodic naturalness with the variational approach, not on optimizing for real-time applications. The improvement in naturalness MOS validates the effectiveness of our method.
- **Reasonable RTF**: On a single NVIDIA L40S GPU with batch size 1, our current implementation achieves an RTF of 0.68 (<1). While this is not our focus, it shows that this approach remains practical. Importantly, the diffusion decoder only operates once after the AR model completes its generation, so the additional latency introduced is not as significant as it might initially appear.
We will make adjustments to Sections 3 and 4 to emphasize these points and avoid confusion.
>Can the mel-spectrogram be decoded only after all continuous features have been generated?
Yes. Our current implementation with diffusion decoder and HiFi-GAN vocoder is not streamable. However, as mentioned above, our framework is compatible with any type of decoder, including streamable ones.
## Use of Re-implemented Models
>The baselines used are re-implemented versions of existing models rather than direct comparisons with the original models.
We re-implemented the baseline models following standard scientific methodology to ensure a controlled and fair comparison. This decision was made for several critical reasons:
- **Architectural consistency**: We needed to maintain consistent architecture and model size across all methods. Our re-implementations use identical decoder and autoregressive model architectures, differing only in the specific modeling approach being evaluated. This isolates the impact of our variational method, which is the primary contribution of our work.
- **Data consistency**: We ensured all models were trained on exactly the same data. Off-the-shelf implementations typically differ in training datasets, preprocessing pipelines, and model sizes, introducing confounding variables that would make it impossible to attribute performance differences specifically to our methodological innovation.
## Comparison with More Recent Approaches
>If the model were compared to more recent approaches using more compressed acoustic tokens such as WavTokenizer, it would likely provide better insight into prosody than an LLM trained only on SSL-based semantic tokens.
We include the Token-LM + Acoustic baseline, which employs acoustic tokens from residual vector quantization (RVQ), capturing information beyond what semantic tokens encode.
To compare with recent approaches on acoustic tokenization, we have included an additional comparison with SpeechTokenizer in [Table 1](https://anonymous.4open.science/api/repo/icml-rebuttal-BDD8/file/ICML-rebuttal-2.pdf) (details in our response to Reviewer XtBM). This expanded analysis further validates the advantages of our variational method across different tokenization techniques.
## Token-LM Baseline with Limited Clusters
>Token-LM models very small discrete tokens with only 200 clusters, the results are not sufficiently comprehensive.
We specifically chose k=200 for our HuBERT baseline after conducting a sweep across different values (k=50, 200, 1000), finding that k=200 performed best for language modeling, as detailed in Appendix F. Additionally, the newly added SpeechTokenizer baseline has 1024 clusters, which our model still outperforms.
Thank you for your valuable feedback which has helped us articulate our contributions more clearly.
## References
[1] X. Zhang et al., SpeechTokenizer: Unified Speech Tokenizer for Speech Language Models, ICLR, 2024 | null | null | null | null | null | null |
A Mixture-Based Framework for Guiding Diffusion Models | Accept (poster) | Summary: This paper explores solving linear and nonlinear inverse problems—sampling from $p(\mathbf{x}_0|\mathbf{y})$—using pre-trained unconditional diffusion models in a Bayesian framework. To approximate the posterior, the authors iteratively sample from intermediate distributions $p(\mathbf{x}_t|\mathbf{y})$, where the prior is given by the unconditional score network, but the likelihood is intractable. Their key contribution is a Gibbs sampling-based method to approximate and sample from $p(\mathbf{x}_t|\mathbf{y})$. At each denoising step, they perform $R$ Gibbs iterations, each consisting of: (1) $G$ gradient steps to fit a variational distribution, (2) sampling from the unconditional diffusion model using $M$ DDPM steps, and (3) a closed-form sampling step from the noising process.
The method is evaluated on linear and nonlinear inverse problems in both pixel and latent space, as well as on a linear audio source separation task. It performs well on pixel-space tasks (though sometimes underperforms competitors) and generally surpasses benchmarks in latent-space tasks.
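The nesting of the sampler described above can be sketched in toy form; every function body and name here is a stand-in of mine (not MGDM's actual implementation), illustrating only how the $R$ Gibbs iterations, $G$-step variational fit, $M$ DDPM steps, and closed-form renoising compose:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_variational(x_t, y, G):
    # (1) Stand-in for G gradient steps fitting the variational distribution.
    return x_t - 0.1 * G * (x_t - y)

def ddpm_sample(x, M):
    # (2) Stand-in for M unconditional DDPM steps with the pre-trained prior.
    for _ in range(M):
        x = 0.99 * x + 0.01 * rng.standard_normal(x.shape)
    return x

def renoise(x0, t):
    # (3) Closed-form sample from the forward noising process q(x_t | x_0).
    alpha_bar = 0.9 ** t
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * rng.standard_normal(x0.shape)

def mgdm_like_sampler(y, T=5, R=2, G=3, M=4):
    x_t = rng.standard_normal(y.shape)
    for t in range(T, 0, -1):          # outer denoising schedule
        for _ in range(R):             # R Gibbs iterations per denoising step
            x_t = fit_variational(x_t, y, G)
            x0 = ddpm_sample(x_t, M)
            x_t = renoise(x0, t - 1)
    return x_t

sample = mgdm_like_sampler(y=np.ones(4))
print(sample.shape)  # (4,)
```

The point of the sketch is the compute trade-off discussed in the review: total work scales roughly with T × R × (G + M), so budget can be shifted between Gibbs iterations R and gradient steps G.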
Claims And Evidence: The paper claims that the proposed method achieves performance on inverse problems that is either comparable to or better than related approaches. While the study includes a broad and relevant set of baselines, a few improvements could strengthen the validity of these claims:
- **Report runtime**: Performance metrics alone are insufficient without the corresponding runtime. Including runtime (and memory requirements, if relevant) would provide a more complete comparison.
- **Include standard deviations / confidence intervals**: Reporting only the mean values of LPIPS, PSNR, and SSIM does not fully convey the statistical significance of the results. Adding standard deviations / confidence intervals would help assess statistical significance.
- **Add FID as a metric**: This is commonly used in inverse problems solved with pre-trained diffusion models, so I would have expected to see it in the image tasks.
Finally, the authors claim that a strength of the method is the possibility to adjust the number of Gibbs sampling steps $R$, besides the number of gradient steps $G$ from the variational approximation, to enhance performance. However, allocating more compute to Gibbs sampling rather than gradient steps appears beneficial only in the phase retrieval task, while for source separation, increasing $G$ seems to be the better strategy. In the other image-based experiments (aside from phase retrieval), only results for $R=1$ are reported, likely because this provided the best tradeoff between performance and runtime. While this is not necessarily an issue, the paper should more clearly specify in which tasks increasing Gibbs sampling steps leads to improvements and when prioritising gradient steps is the preferable approach.
Methods And Evaluation Criteria: The benchmark datasets do make sense, and I appreciate the fact that the authors provide experiments using both pixel- as well as latent-space models, and on both linear and nonlinear inverse problems. The audio task is also a good addition.
However, I would also suggest evaluating FID to provide a more complete assessment of generative quality. Additionally, as mentioned earlier, reporting standard deviations or confidence intervals alongside the mean metrics would help assess the statistical significance of the results.
For a more comprehensive performance analysis across diverse inverse problems, it would have been useful to include at least one example with a non-Gaussian likelihood and perhaps a setting with higher noise. However, I appreciate that computational constraints may have limited the number of experiments that could be conducted.
Theoretical Claims: Yes, I did go through the proofs and, as far as I am concerned, they are correct. However, I do believe that introducing $s$ as a state in the Gibbs sampling procedure is better motivated theoretically, but I agree that it is computationally infeasible.
The only equations I disagree with are:
- Equation (14) where the integration in the denominator should be over $\mathrm{d} \mathbf{x}_{s', t'}$
- The one in the appendix (line 800 in A.5), though I believe it is likely a typo.
Experimental Designs Or Analyses: The authors provide details on the sources of the baseline implementations, which generally seem reasonable. The tasks considered are fairly standard and have been explored in previous works, including those the authors compare against.
I do wonder, however, whether better hyperparameter choices for the baselines could lead to improved performance. While tuning baselines extensively may be beyond the scope of the paper, I was surprised by the underperformance of PGDM vs. DPS. In the linear setting, I would have expected PGDM to perform comparably to or better than DPS. If this was not observed, it might have been because of suboptimal hyperparameter settings for PGDM.
The paper also makes several heuristic choices, which I feel are decently motivated, but might benefit from some extra investigation. Some of these are mentioned in the paper, and they include: the weight sequence, the number of DDPM steps $M$, and the interaction between $R$ and $G$. However, there are so many hyperparameters that I appreciate it is hard to gain a comprehensive understanding of how each of these affects performance, and whether this is task-dependent or not.
Supplementary Material: Yes, I reviewed the **A. Methodology Details** section and **B. 4 Implementation of the competitors** section.
I also looked over the code implementation from the zip file of MGDM, DPS, PGDM, and the hyperparameters used for these methods.
Relation To Broader Scientific Literature: The paper contains an extensive overview of the relevant literature, highlighting the most relevant works that tackle Bayesian inverse modelling leveraging pre-trained diffusion models. The alternative likelihood approaches are well captured, the authors mention the relevant SMC approaches, and also compare to the most closely related methods based on Gibbs sampling. The detailed comparisons in Appendix A.5. are particularly useful in clarifying the distinctions between MGDM and the closely related DAPS and MGPS methods.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: **Strengths**
- S1: Able to handle pixel- and latent-space diffusion out of the box.
- S2: Able to tune performance by tweaking two different hyperparameters: $R$ and $G$, although also see weakness 3.
- S3: Clear comparison to related methods.
- S4: Strong empirical performance in the majority of cases, although this should be analysed alongside runtime metrics and also include standard deviations besides mean metrics.
**Weaknesses**
- W1: One of the main weaknesses is the lack of computational time analysis when comparing to the baselines. In my perspective, this is crucial to holistically compare different related posterior sampling methods, especially when some metrics are so similar.
- W2: Another weakness from the evaluation side is, as mentioned above, the lack of standard deviations / confidence intervals in the results.
- W3: Although the authors stress that the ability to increase performance through increasing $R$ is a strength of the algorithm, this doesn’t seem to be the best strategy in all cases (i.e. directing compute to gradient steps is more lucrative in source separation). In general having too many hyperparameters can become overwhelming, especially if they are task- and domain- dependent. I am not convinced that currently the paper contains enough settings for the authors to make some clear recommendations.
- W4: There is also limited exploration of the effect of $M$, the number of DDPM steps. Why did the authors go with $M=20$ and have other choices been explored too? Is it clear that having a fixed $M$ value is the optimal choice, rather than potentially having it depend somehow on $s$?
- W5: All tasks consider either no noise or Gaussian noise with fixed $\sigma_y=0.05$. This does not make it clear whether the method would perform well under non-Gaussian likelihoods.
Other Comments Or Suggestions: - Line 201 right: “Treating s **as** fixed”
- Line 315 right - you only compare to seven competitors.
- The reference to Wu et al. [2024] [1] is repeated twice (2024a and 2024b). Same with Zhang et al. [2024] [2]
- Line 303 right: **“repeatedly”** instead of **“repeatidly”**
- Perhaps when mentioning the scaling from works such as DPS (L122 right) it is also worth highlighting that the scaling factors are heuristic, rather than very well theoretically underpinned.
- Appendix A.2. Line 635 Equation (14) - the integration in the denominator should be over $\mathrm{d} \mathbf{x}_{s', t'}$ I think
- Typo on line 800 - I think it should be $\pi^{\mathbf{y}}_{0|t+1}(\mathbf{x}_0|\mathbf{x}_{t+1})$
- Typo on line 956: “their" instead of "there variables”
- Line 1009: What do you mean by you “exposed” the coupling parameter $\rho$?
- Typo 1036: **“fine-grained”** instead of **“in-grained”** + that **are** more coherent
[1] Wu, Z., Sun, Y., Chen, Y., Zhang, B., Yue, Y., and Bouman, K. L. Principled probabilistic imaging using diffu- sion models as plug-and-play priors. arXiv preprint arXiv:2405.18782, 2024a
[2] Zhang, B., Chu, W., Berner, J., Meng, C., Anandkumar, A., and Song, Y. Improving diffusion inverse problem solving with decoupled noise annealing. arXiv preprint arXiv:2407.01521, 2024a
Questions For Authors: Besides the already mentioned questions from the previous sections (mainly from the **Weaknesses** section), I have a few more questions:
- Q1: Remark A.1. from Appendix A.3 - I find it surprising that computing the MC estimate for the squared norm in (17) gives better results than computing it analytically. Any insight as to why this might be the case?
- Q2: Why don't you provide comparisons to MGPS?
- Q3: This might be hard to estimate, but do you think a variational approximation with a non-diagonal covariance matrix would give significant improvements in the method? Or do you think that would be too complicated for relatively small improvements?
- Q4: Do you have any intuition of how accurate the Gaussian approximation for $\hat{\pi}^{\mathbf{y}}_{s|0, t}$ is for (possibly nonlinear) cases with non-Gaussian noise? I realise this might actually depend on how close $s$ is to $t$ and $0$, but just curious to see if you have any intuition about this.
- Q5: For the Gibbs sampler procedure (Algorithm 1) - could you do the steps in another order? I agree that the one you chose seems to be the most natural one, but just wondering whether this is a possibility and how you think it might affect the results. In the limit, it should converge to the same distribution, right?
## Update after rebuttal
I am satisfied with how the rebuttal has addressed my concerns. In particular, I found the addition of the runtime and memory comparisons with competitors valuable additions to the paper. As a consequence, I increased my score to 4 (Accept).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thorough review of our paper. We address your main weak points/questions below. The additional tables we discuss below can be found here: https://anonymous.4open.science/r/rebuttal-F9B0/rebuttal_tables.pdf
> **[...] at least one example with a non-Gaussian likelihood [..].**
In the initial version, we prioritized extensive comparisons against multiple methods across two modalities. Following your suggestion, we have conducted preliminary experiments with Poisson noise, comparing our method against DPS due to their similar experimental setup. For MGDM, we directly employed the Poisson likelihood without resorting to the Gaussian approximation and maintained the original hyperparameters. In contrast, we used the Gaussian approximation for DPS as outlined in Eqn. (19)[arXiv version], since the Poisson likelihood proved challenging to implement effectively and the original DPS paper lacks guidelines for this scenario. The results are detailed in Table 6, clearly showing that MGDM outperforms DPS across all considered metrics. Additionally, to address potential concerns regarding MGDM’s performance under higher noise conditions, we have included an extra benchmark with a noise standard deviation of 0.3, presented in Table 5.
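As context for the Gaussian approximation used for DPS here: a common choice is the moment-matched normal $\mathcal{N}(\lambda, \lambda)$ in place of $\mathrm{Poisson}(\lambda)$. A quick sanity check (illustrative only, not the paper's code) of when this is reasonable:

```python
import math

# Compare the exact Poisson log-likelihood against the moment-matched
# Gaussian N(lam, lam): the approximation is accurate for large counts
# and degrades for small ones, consistent with the difficulty reported above.
def poisson_logpmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def gauss_logpdf(k, lam):
    return -0.5 * math.log(2 * math.pi * lam) - (k - lam) ** 2 / (2 * lam)

gap_large = abs(poisson_logpmf(100, 100.0) - gauss_logpdf(100, 100.0))
gap_small = abs(poisson_logpmf(0, 2.0) - gauss_logpdf(0, 2.0))
assert gap_large < 0.01 < gap_small
```

This suggests the Gaussian surrogate is mainly a concern in low-photon regimes.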
> **Report runtime [...] Add FID as a metric**
We have implemented all of your recommendations. Please see Fig. 1 and Tables 1, 2, and 3 in the attached document. We refer to our response to R. E9Pa for comments on our runtime and that of the competitors. Our method consistently achieves competitive FID scores across all three evaluated models.
> **better hyperparameter choices**
PGDM has no official implementation, and directly implementing Algorithm 1 yielded suboptimal results. To ensure a fair comparison, we instead utilized the authors' official RedDiff implementation, where the guidance term is scaled by $\alpha_{t-1}\alpha_t$ rather than solely by $\alpha_t$ (lines 70 and 73 in https://github.com/NVlabs/RED-diff/blob/master/algos/pgdm.py, with grad_term_weight=1). This modification improved PGDM's performance across most tasks, except for JPEG2, where the original formulation was superior (line 71 in our pgdm.py). Supporting this observation, Boys et al. (2023, Table 2) and Rozet et al. (2024, Table 4) also reported that PGDM generally underperforms DPS. For a fair comparison, we carefully tuned PGDM at 1000 steps to match compute budgets, despite Rozet et al.'s suggestion that fewer steps (around 100) might yield better results. Even under these conditions, PGDM at best matches DPS performance, reinforcing our conclusion that MGDM consistently outperforms both methods. We stress that we promote the reproduction of all benchmarks (as detailed in Appendix B.4 and our openly accessible codebase), to ease verification by the community.
> **[...] number of Gibbs sampling steps [...]**
In Fig. 3 of the paper, we demonstrate that performance consistently improves as the number of Gibbs steps increases for the audio separation task, surpassing all training-free posterior sampling methods. Furthermore, increasing the number of gradient steps allows our approach to exceed Demucs' performance. Across all experiments, we've consistently observed benefits from increasing Gibbs steps, whereas additional gradient steps yield diminishing returns beyond a certain threshold. Specifically, we selected $R=1$ for the image experiments as it provides the optimal balance between computational efficiency and competitive performance relative to other baselines.
> **effect of $M$, the number of DDPM steps [...]**
We selected $M=20$ as it offered a trade-off between computational cost and image quality. While increasing $M$ enhances image quality, it does not significantly improve posterior exploration, in contrast to Gibbs and gradient steps. However, in response to your recommendation, we conducted additional experiments to investigate how varying $M$ and the number of diffusion steps affect performance; results can be found in Figure 2. Furthermore, regarding the choice of the weight sequence, we already provide in Appendix B.1 an empirical analysis of the strategy of sampling $s$ close to $0$ at every step.
- **MC estimate of the KL:** We also found this result surprising, given the performance gains observed. Specifically, we discovered that estimating the KL divergence using the same noise as for the likelihood expectation effectively reduces the variance of the gradient estimator, leading to improved results.
- **Comparison to MGPS** We have prioritized comparing well-established methods and recent contenders. Nevertheless, we commit to comparing with MGPS in the revised version of the paper.
- **non-diagonal covariance** We have already tested adding a low-rank perturbation to the diagonal covariance matrix but found that it doesn’t significantly improve the result. The most significant improvement comes from using a diagonal instead of a scalar matrix.
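The variance-reduction observation in the KL point above (reusing the same noise realizations across two Monte Carlo terms, i.e., common random numbers) is easy to demonstrate in isolation; this toy snippet is illustrative and unrelated to the paper's actual estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def diff_estimate(shared, n=500):
    # Monte Carlo estimate of E[(1 + eps)^2] - E[(1.1 + eps)^2];
    # with shared=True both terms reuse the same noise draws.
    eps1 = rng.standard_normal(n)
    eps2 = eps1 if shared else rng.standard_normal(n)
    a = np.mean((1.0 + eps1) ** 2)
    b = np.mean((1.1 + eps2) ** 2)
    return a - b

var_shared = np.var([diff_estimate(True) for _ in range(2000)])
var_indep = np.var([diff_estimate(False) for _ in range(2000)])
assert var_shared < var_indep / 10  # shared noise is far less noisy
```

The correlation between the two terms cancels most of the sampling noise in their difference, which is the mechanism behind the improved gradient estimator described above.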
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough and well-structured rebuttal. The additional results and explanations, particularly the runtime and memory comparisons with competitors, are valuable contributions to the paper. I strongly encourage you to incorporate some of the discussion points from your response to reviewer E9Pa into the final manuscript.
Thank you for performing the additional experiments with the non-Gaussian likelihood and higher noise level.
"the Poisson likelihood proved challenging to implement effectively and the original DPS paper lacks guidelines for this scenario"---This is indeed in line with my experience and the experience of other works that attempted to apply DPS for Poisson likelihoods.
**"[...] number of Gibbs sampling steps [...]"** This is the only point I would like further clarification on. The way I interpret Figure 3 (left) for the audio separation task is that:
1) Increasing the number of Gibbs sampling steps $R$ does indeed generally lead to better performance (although it seems to plateau after $R>4$. In some cases, $R=4$ slightly outperforms $R=6$, though the difference may not be statistically significant.
2) However, the last column uses $R=1$ Gibbs sampling steps and a number of gradient steps $G$ that makes the runtime the same as $R=6$. The last column is the one that gives the best results overall. Doesn't this mean that for a fixed compute budget (equivalent to $R=6$) the best strategy is to only use $1$ Gibbs sampling step and use the rest for gradient steps?
Overall, I am satisfied with how the rebuttal has addressed my concerns and would be willing to increase my score to 4 (Accept). However, I think the system does not currently allow for score adjustments.
---
Reply to Comment 1.1.1:
Comment: Thank you! We are glad that our rebuttal has addressed your concerns.
Regarding your comment on the number of Gibbs steps, we agree with your interpretation. While increasing the number of gradient steps can sometimes lead to the best performance, this strategy is not consistently optimal across tasks. In contrast, increasing the number of Gibbs steps tends to yield more reliable improvements and is therefore our recommended default for practitioners, particularly when tuning is limited. We will clarify this guidance in the final version of the paper and include additional examples to illustrate this recommendation more concretely.
We would also like to thank you for your decision to increase your score to Accept. It is now possible to modify the score by clicking the edit button on your original review. | Summary: The paper presents a novel training-free guidance method that allows sampling from g(y|x_0)p(x_0), where p(x_0) is a pre-trained diffusion model distribution and g(y|x_0) is a likelihood function on the clean data. To do this, they come up with a novel approach to approximate the conditioned noisy distributions p(x_t|y) given the unconditional model, and on top of that, a new method to calculate the scores from these density approximations. The core of the method is that when sampling at diffusion noise level x_t, there is an inner loop that samples some s<t, and does Gibbs sampling from p(x_0, x_s, x_t)g(y|D(x_s)). This process defines a specific distribution over x_t, and the outer loop sampling process consists of moving through this sequence of distributions, until we hit p(x_0)g(y|x_0) at the end. To be more precise, the target distribution at each level is a mixture of the p(x_0, x_s, x_t)g(y|D(x_s)) distributions, with different probabilities for different s, and this is where the paper derives its name. The method achieves strong performance on a variety of linear and nonlinear inverse problem tasks on image data and multi-source audio separation. The method is also applicable for use with latent diffusion models. The authors also find that the Gibbs sampling procedure provides new ways to improve performance by applying more inference-time compute.
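To make the loop structure described in the summary concrete, here is a heavily simplified 1-D toy sketch. All schedules, step sizes, and update rules below are illustrative placeholders, not the paper's MGDM algorithm: a Gaussian prior $\mathcal{N}(0,1)$ with $x_t = x_0 + \sigma_t \varepsilon$ gives a closed-form denoiser $\mathbb{E}[x_0|x_t] = x_t/(1+\sigma_t^2)$, and the observation is $y = x_0 + \text{noise}$.

```python
import numpy as np

rng = np.random.default_rng(0)

sigmas = np.linspace(2.0, 0.05, 10)      # illustrative decreasing noise schedule
y, sigma_y = 0.8, 0.1                    # toy observation y = x_0 + N(0, sigma_y^2)

def denoise(x, sigma):
    # exact posterior mean E[x_0 | x] for the N(0,1) prior with x = x_0 + sigma*eps
    return x / (1.0 + sigma ** 2)

x_t = sigmas[0] * rng.standard_normal()  # start from the prior at the highest noise
for i, t in enumerate(sigmas[:-1]):
    s = sigmas[i + 1]                    # a single fixed s < t (the paper mixes over s)
    for _ in range(3):                   # R Gibbs repetitions over the triplet
        # x_0 update: sample around the denoised point
        x_0 = denoise(x_t, t) + 0.1 * rng.standard_normal()
        # x_s update: noise x_0 to level s, then nudge toward the observation
        # through the likelihood evaluated at the denoised point D(x_s)
        x_s = x_0 + s * rng.standard_normal()
        x_s += 0.5 * (y - denoise(x_s, s)) * s ** 2 / (sigma_y ** 2 + s ** 2)
        # x_t update: re-noise x_s back up to level t
        x_t = x_s + np.sqrt(t ** 2 - s ** 2) * rng.standard_normal()

x_hat = denoise(x_t, sigmas[-2])         # final clean estimate at the last level
```

The real algorithm replaces the analytic denoiser with a pre-trained network, mixes over several $s$ levels rather than fixing one, and fits a Gaussian variational approximation for the $x_s$ conditional instead of the hand-tuned nudge used here.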
Claims And Evidence: I think most of the claims are supported by evidence.
Methods And Evaluation Criteria: The evaluation criteria makes sense for the application at hand, and the method is sensible as well. The method does, however, have some complexity, and the motivation for the particular choices in the method are not entirely clear for me.
Theoretical Claims: I didn’t go through the details of the method very closely in the Appendix (although the main calculation-wise involved new part seems to be Appendix A.2.). I think I have understood the mathematical definition of the model, and it is sensible to me.
Experimental Designs Or Analyses: The main experiments on imaging inverse problems are standard benchmarks in the field, and are sound applications to focus on. One issue is that I did not find a comparison of runtime or neural function evaluation count for the different methods. As pointed out in the paper, they are able to improve results with more inference-time compute, but this also holds for the other methods (at least by increasing the amount of diffusion steps for methods like DPS, PGDM, DDNM and DiffPIR, and by increasing the amount of optimization steps in methods like Reddiff). It is useful to be able to push the results further than previous methods in absolute terms, but ideally we would also have an analysis of the different methods at different levels of compute to clearly distinguish the regime in which the method provides improvements over prior work. In case an extensive evaluation of all of the competing methods at different compute requirements is difficult, I think that it would at least be important to include the compute requirements for producing the results. Based on Table 5 in the Appendix and my understanding of the method, in pixel-space FFHQ, it seems that the method is using on the order of >2700 forward passes and >700 backward passes of the denoiser?
Another part I found a bit lacking was the discussion on the hyperparameters K, R, M and G (regular diffusion steps, Gibbs repetitions, denoising steps in the Gibbs inner loop for p(x_0|x_s), and gradient steps for approximating q(x_s|x_0,x_t)p(y|g(D(x_s)))). Would it be possible to provide a bit more thorough evaluation of scaling the inference-time compute along these different axes? This would be especially interesting considering that the authors found increasing R in the imaging inverse problem case to be more useful than increasing G, and vice versa for the audio source separation case.
Supplementary Material: I briefly glanced through appendices A.1, A.2, A.3, A.4., A.5., and looked at the additional results in Table 6 and the hyperparameter choices in Table 5.
Relation To Broader Scientific Literature: The paper continues exploring new methods for applying inference-time conditions on diffusion models without changing the denoiser network. Many works in the area have focused on directly approximating the modifications needed to the diffusion model score function, e.g., through Tweedie’s formula (e.g., [1], [2]), allowing the use of standard diffusion samplers. Other works, including this one, remove the requirement of using standard diffusion sampling processes, and instead redefine the sequence of marginal distributions, using MCMC-like methods to move to lower noise levels. This paper is in the latter category, and, to the best of my knowledge, proposes a novel method in this area. The central idea of introducing a triplet of timesteps (0,s,t) to update x_t to match with the constraint, using approximations at the s stage, is new, although related to some earlier work (as detailed in Appendix A.5 and the related works section “Gibbs sampling approaches”).
[1] Chung, Hyungjin, et al. "Diffusion Posterior Sampling for General Noisy Inverse Problems." The Eleventh International Conference on Learning Representations.
[2] Song, Jiaming, et al. "Pseudoinverse-guided diffusion models for inverse problems." International Conference on Learning Representations. 2023.
Essential References Not Discussed: I am not aware of key contributions in this domain that are missing, although I was not very familiar with the DAPS and MGPS algorithms cited as the closest related work.
Other Strengths And Weaknesses: I think that the idea of enlarging the design space of conditional samplers by considering triplets (x_0,x_s,x_t), performing approximations in strategic locations of the algorithm, and defining custom Gibbs samplers, appears sensible and is original. The quantitative results against baselines are good as well, and the method appears promising.
A negative (aside from the experiments mentioned before) is that the model description was somewhat difficult to read, and the paper would benefit from improving the writing. For instance, having an overview and stating the key approximations up front before getting into the weeds of the algorithm description could be useful. A more clear explanation of the method and the problems it tackles in the introduction would be useful as well.
On the actual algorithm side, the motivation for the particular design choices was left a bit lacking (see questions). The details of how the s distribution is chosen is also not particularly elegant, which is not a major issue since there is some analysis on it. Due to the concerns raised, I am starting out with a weak reject, but I am open to adjust after the rebuttal.
Other Comments Or Suggestions: This is not a key concern, but it is mentioned in the Related Works that “Finzi et al., Stevens et al., Boys et al.” use that the covariance of p(x_0|x_t) is proportional to the Jacobian of the denoiser… To mitigate this, these works and subsequent ones assume that the Jacobian of the denoiser is constant with respect to x_t.” I don’t think this is exactly true: at least Finzi et al. and Boys et al. do use the network Jacobian, and as such the covariance is not constant w.r.t. x_t. This is also true in subsequent work that does not use the network Jacobian for the covariance approximation ([1] and [2]).
[1] Peng. et al. "Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance." ICML 2024
[2] Rissanen. et al. “ Free Hunch: Denoiser Covariance Estimation for Diffusion Models Without Extra Costs.” ICLR 2025
Questions For Authors: The method is quite complex, and it is a priori somewhat unclear why we would use this, instead of, e.g., the methods mentioned in Appendix A.5. Why do we introduce the intermediate s-timestep in the first place? Why do we use a mixture of timesteps s instead of a fixed s schedule?
Do you have an intuition on why sampling s close to 0 is a source of instabilities?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thorough review. We address the main weak points/questions below. The additional tables and figures mentioned below can be found here: https://anonymous.4open.science/r/rebuttal-F9B0/rebuttal_tables.pdf
> **comparison of runtime[...]**
Although initially not mentioned (now corrected), we used the maximum diffusion steps (1000) for DPS, PGDM, DDNM, DiffPIR, and PSLD. For methods like RedDiff, DAPS, ReSample, and PnP-DM, we tuned the compute time by increasing Langevin/denoising/optimization steps until performance plateaued, ensuring optimal competitor performance. Hence, we are confident that we have pushed each competitor to their optimal performance limit. Please see Figure 1 for a figure summarizing the runtime and memory costs of each method. This figure will also be included in the revised version of our paper. Note that our method has similar memory demands as DPS and PGDM on pixel space, while in latent space its memory usage aligns with the other methods. Notably, in the latent-diffusion setting—which is particularly relevant given the prevalence of latent-space models—our method outperforms all others in speed while maintaining consistently strong performance across all benchmarks. In pixel space, our method is slower, but this comes with the benefit of consistent and strong performance. Please see our comment to Reviewer E9Pa regarding the runtime of the competitors.
> **evaluation of scaling the inference-time [..]**
We thank the reviewer for the valuable suggestion. We conducted additional experiments analyzing performance evolution with the hyperparameters $K, R, M, G$—representing regular diffusion steps, Gibbs repetitions, denoising steps in Gibbs for $p(x_0|x_s)$, and gradient steps for approximating $q(x_s|x_0,x_t)p(y|g(D(x_s)))$, respectively. Specifically, we extended Figure 3 to explore increased compute along different axes for the phase retrieval task (see Figure 2). We concluded that scaling diffusion, denoising, or gradient steps has less impact for this task than increasing Gibbs steps.
> **model description [...]**
Thank you for your suggestion. We acknowledge that some parts of our presentation could have been clearer. To improve accessibility, we've reorganized Section 3, which now begins with a concise conceptual overview before detailing the algorithm step-by-step. We have also added the main intuition behind latent variables $x_0, x_s$, and $x_t$ and their iterative evolution through denoising, noising, and variational updates.
> **Why do we introduce the intermediate s-timestep [...]**
Our method introduces two latent variables $x_s, x_0$, enabling updates without relying on unrealistic approximations of the denoising distributions. This contrasts with approaches like DAPS, which approximate the posterior distribution $x_0 \mid x_t$ using a Gaussian distribution parameterized by a tunable covariance. Furthermore, our method leverages likelihood approximations derived from earlier diffusion steps $s < t$, a perspective that initially motivated our algorithm. Additionally, framing our approach through the lens of Gibbs sampling provides a novel dimension for performance enhancement. These three aspects—accurate latent variable integration, leveraging historical likelihood approximations, and exploiting Gibbs sampling insights—are currently underexplored and could significantly benefit the research community.
> **Why is sampling $s$ close to 0 a source of instabilities? + Why do we use a mixture of timesteps $s$ instead of a fixed $s$ schedule?**
As we explain in Appendix B.1, sampling $s$ close to 0 leads to an accurate likelihood approximation, which is particularly important for latent diffusion. This allows us to fit the observation quickly; however, the unobserved part of the state evolves very slowly, leading to a poor reconstruction of the unobserved part of the state; see Figure 4. This occurs because the prior $q_{s|0, t}(\cdot|x_0, x_t)$ is concentrated around $x_0$ when s is close to 0. Conversely, when $s$ is far from 0, the likelihood approximation is less accurate but the prior becomes less constrained. Using a uniform mixture balances these opposing behaviors effectively, with minimal hyperparameters.
> **[...]Boys et al. do use the network Jacobian, and as such the covariance is not constant w.r.t. $x_t$**
In Boys et al., the likelihood approximation (Eqn. (11), arXiv version) explicitly assumes that the covariance—and consequently the Jacobian of the denoiser—is constant w.r.t. $x_t$. Without this assumption, differentiating Eqn. (10) would not lead directly to Eqn. (11). Indeed, Boys et al. explicitly mention: ‘For this reason, we treat the matrix $C_{0|t}$ as constant w.r.t. $x_t$ when computing the gradient.’ Similarly, [2] employs the same assumption (see Eqn. (23), OpenReview version, and the corresponding commentary).
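For readers following along, the role of this assumption can be sketched as follows (our notation for a generic linear observation model $y = A x_0 + \varepsilon$; this is a sketch, not quoted from either paper). With the Gaussian likelihood approximation

$$
\hat p(y \mid x_t) \;=\; \mathcal N\!\big(y;\, A\,\hat x_0(x_t),\; \sigma_y^2 I + A\, C_{0|t}\, A^\top\big),
$$

differentiating $\log \hat p(y \mid x_t)$ w.r.t. $x_t$ exactly would bring in terms containing $\partial C_{0|t}/\partial x_t$ (third-order derivatives of the log-density); treating $C_{0|t}$ as constant drops them, leaving the guidance term that is used in practice:

$$
\nabla_{x_t} \log \hat p(y \mid x_t) \;\approx\; J^\top A^\top \big(\sigma_y^2 I + A\, C_{0|t}\, A^\top\big)^{-1} \big(y - A\,\hat x_0(x_t)\big),
\qquad J = \frac{\partial \hat x_0(x_t)}{\partial x_t}.
$$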
Again, we thank the reviewer for the valuable feedback. Please let us know if you have any further questions. | Summary: To resolve the error in the approximated likelihood gradient of diffusion posterior sampling (DPS) and relevant works, the paper defines a novel posterior density $p(x_t|y)$ as a mixture of normalized $\hat p_s(x_t|y)= \hat p_s(y|x_t)p(x_t)$ where $\hat p_s(y|x_t)= \int \hat p(y|x_s)p_{s|t}(x_s|x_t)\, \mathrm{d}x_s$, $\hat p(y|x_s)= p(y|\mathbb{E}[x_0|x_s])$ and $0<s\leq t-1$. To sample from this intractable density, the paper also leverages Gibbs sampling with a Gaussian variational approximation.
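In display form, the density defined in the summary reads (the mixture weights $w_s$ and the normalization are notation added here for readability; the rebuttal discussion indicates the paper defaults to a uniform mixture over $s$):

$$
\hat p_s(y \mid x_t) = \int p\big(y \mid \mathbb{E}[x_0 \mid x_s]\big)\, p_{s|t}(x_s \mid x_t)\, \mathrm{d}x_s,
\qquad
p(x_t \mid y) \;\propto\; \sum_{s=1}^{t-1} w_s\, \hat p_s(y \mid x_t)\, p(x_t).
$$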
## update after rebuttal
Regarding the first drawback in the evaluation: (2) Heavy reliance on a single metric (LPIPS) — the authors emphasize the inadequacy of pixel-wise metrics (PSNR and SSIM) for evaluating the specific use case of inverse problems. While the reviewer agrees that pixel-wise metrics can favor blurry or overly smooth outputs, these metrics remain important for assessing low-frequency information, such as color accuracy. Consequently, inverse problem solvers [1,2] typically report both pixel-wise and perceptual metrics, rather than relying on a single type. From this perspective, the reviewer believes that the evaluation in the paper places excessive emphasis on perceptual metrics.
For the other points, the major concerns have been resolved convincingly.
In summary, the reviewer thanks the authors for their efforts in the rebuttal and maintains the score of 3.
[1] Direct Diffusion Bridge using Data Consistency for Inverse Problems, NeurIPS 2023
[2] Improving Diffusion Inverse Problem Solving with Decoupled Noise Annealing, CVPR 2025
Claims And Evidence: The main claim is that the proposed mixture approximation of $p_t(x|y)$ is better than the likelihood approximations of $p_t(y|x)$ proposed by prior works, in two respects: it is more principled and it adapts to the computational budget. The paper provides evidence via extensive experiments.
Methods And Evaluation Criteria: The paper evaluates the method on widely used benchmark datasets: FFHQ and ImageNet. Furthermore, the authors cover extensive types of linear and non-linear problems. Drawbacks of the evaluation are that 1) the FID metric is missing and 2) the analysis depends heavily on a single metric, LPIPS.
Theoretical Claims: The reviewer has checked mathematical details given in the appendix.
Experimental Designs Or Analyses: The reviewer checked the validity of the experimental design. It follows prior works on diffusion-based inverse problem solvers. However, the paper does not provide an analysis of its efficiency, even though it introduces Gibbs sampling with variational optimization.
Supplementary Material: The reviewer has checked details on algorithm, related works, experimental details and additional results.
Relation To Broader Scientific Literature: Inverse problems are applicable to a wide range of scientific problems. Thus, improving performance on inverse problems has further impact on the scientific literature as well.
Essential References Not Discussed: The paper includes essential citations and related works.
Other Strengths And Weaknesses: Strength
- The paper is written clearly and its motivation is well presented
- Performance of the proposed method is promising
Weakness
- An analysis of efficiency is missing.
- The main paper focuses on a single perceptual metric, LPIPS.
Other Comments Or Suggestions: No minor comments.
Questions For Authors: - Could the authors provide an analysis of the efficiency of the proposed method compared with the baselines? For example, runtime and memory cost.
- Could authors evaluate the FID for provided experiments?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback. We appreciate your acknowledgment of the paper's clarity and the promising nature of our method. Below, we directly address your key points and questions, supported by additional results. The supplementary tables and figures mentioned can be accessed here: https://anonymous.4open.science/r/rebuttal-F9B0/rebuttal_tables.pdf.
> **Drawbacks in evaluation: 1) Missing evaluation metric FID, and 2) Heavy reliance on a single metric (LPIPS).**
First, we refer the reviewer to Appendix B.6, where we show that LPIPS is particularly suitable for our tasks, where reconstructions naturally deviate significantly from the reference images. On the other hand, pixel-based metrics (such as PSNR or SSIM) are inadequate for capturing meaningful perceptual differences in such scenarios, potentially leading to misleading conclusions. For instance, Figure 8 in the original manuscript clearly illustrates this limitation. Although reconstructions from methods like DAPS and DiffPIR appear overly smooth and lack coherence, they nevertheless achieve superior PSNR and SSIM scores, highlighting the inadequacy of these pixel-wise metrics for evaluating our specific use case.
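The authors' point about pixel-wise metrics can be reproduced in a tiny 1-D experiment (illustrative only, not from the paper): an over-smoothed reconstruction beats a sharp but slightly shifted one on PSNR, even though the latter preserves the signal's structure.

```python
import numpy as np

def psnr(ref, x, peak=1.0):
    # peak signal-to-noise ratio in dB for signals scaled to [0, peak]
    return 10.0 * np.log10(peak ** 2 / np.mean((ref - x) ** 2))

n = 256
t = np.linspace(0.0, 1.0, n)
ref = 0.5 + 0.4 * np.sin(2 * np.pi * 5 * t)                # sharp band-limited "image"

blurred = np.convolve(ref, np.ones(15) / 15, mode="same")  # over-smoothed copy
shifted = np.roll(ref, 3)                                  # sharp, shifted by 3 samples

# the blurry candidate wins on PSNR despite losing high-frequency detail
assert psnr(ref, blurred) > psnr(ref, shifted)
```

Perceptual metrics such as LPIPS are designed to be less sensitive to this failure mode, which is the rationale the rebuttal gives for favoring them here.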
Nevertheless, we acknowledge the importance of evaluating our method with diverse metrics, and therefore we have included FID scores in the revised evaluation. Please refer to Tables 1, 2, and 3 in the provided PDF link for detailed results. These additional results confirm that our method remains highly competitive in terms of FID scores across all three models evaluated.
> **Could authors provide an analysis on efficiency of the proposed method compared with baselines? For example, runtime and memory cost.**
Please see Figure 1 in the linked PDF, which summarizes both runtime and memory consumption across our method and the relevant baselines. This figure will be integrated into the revised manuscript.
Our method has memory requirements similar to DPS and PGDM in pixel space and aligns closely with other methods in latent space. Importantly, in latent diffusion—a highly relevant scenario given the prevalence of latent-space models—our method is notably faster than all competitors while consistently achieving strong performance across benchmarks.
Conversely, when operating directly in pixel space, our method exhibits somewhat slower runtimes compared to some alternatives. However, this increase in computational overhead is consistently balanced by improved and stable reconstruction quality across all considered tasks. Thus, we position our method as offering a beneficial trade-off, especially in scenarios where quality and consistency of results are paramount.
We highlight several important points regarding the competitors’ runtime:
- For latent diffusion, DAPS and PnP-DM perform a significant number of Langevin steps using the gradient of the likelihood. Since the latter involves a vector-Jacobian product of the decoder, the runtime increases significantly. More generally, when the likelihood function is expensive to evaluate, DAPS and PnP-DM are expected to be much slower than DPS, PGDM and MGDM.
- For PnP-DM on FFHQ, we have implemented the likelihood step exactly on the linear tasks. On ImageNet however, using Langevin steps provided better results and this explains the significant increase in runtime.
Thank you very much for your feedback. Please let us know if you have any further questions. We kindly refer you to our responses to Reviewers awaZ and SCZR, where we’ve provided additional experiments and metrics that further support our claims.
This manuscript is well written!
Claims And Evidence: Please see the summary section.
Methods And Evaluation Criteria: The methods and evaluation datasets and criteria are sufficiently represented. Manuscript closely follows previously published works.
Theoretical Claims: A Gibbs sampler over the augmented variables is used to target the mixture posterior as its stationary distribution, with the conditionals used to obtain posterior samples. The proof is provided in Appendix A.
Experimental Designs Or Analyses: The experimental designs are appropriate for the defined problem, and are similar to previously published results.
Supplementary Material: The experimental details presented in supplementary material was helpful to get better insights.
Relation To Broader Scientific Literature: The use denoising diffusion models for the Bayesian inverse problems is well studied problem. The authors have done an excellent job in comparing with SotA approaches in this domain.
Essential References Not Discussed: Not that I am aware of.
Other Strengths And Weaknesses: A few topics that the authors could discuss:
1. Gibbs sampling can have convergence issues, even if you run more iterations. How do you address it?
2. Is there diversity in generated samples?
3. How to address the scaling issues of the algorithm?
Other Comments Or Suggestions: None.
Questions For Authors: Discussed above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments and helpful suggestions. Below, we directly address the points raised:
>**1. Gibbs sampling can have convergence issues, even if you run more iterations. How do you address it?**
We acknowledge that Gibbs sampling can indeed exhibit convergence challenges, especially in high-dimensional or strongly correlated distributions. In fact, in Appendix B.1 we show that for one specific choice of weight sequence, convergence issues arise. More specifically, when the weight sequence is designed to sample the index $s$ close to $0$ at all iterations, MGDM mixes very slowly due to the high correlation between $x_0$ and $x_s$, since $s \approx 0$. This motivates our approach of using intermediate uniform mixture posteriors, which bypasses these correlation issues and reduces dependence on the precise convergence of each Gibbs iteration, thereby enhancing overall robustness.
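As an illustrative aside (a toy sketch, not the paper's algorithm): the slow-mixing effect described above can be reproduced with a two-variable Gibbs sampler on a bivariate Gaussian, where strong correlation between the coordinates plays the role of the correlation between $x_0$ and $x_s$ for $s \approx 0$. All numbers below are made up for illustration.

```python
import numpy as np

def gibbs_chain(rho, n_iter, seed=0):
    """Gibbs sampler for a bivariate Gaussian with correlation rho.
    Each full conditional is N(rho * other, 1 - rho**2)."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    xs = np.empty(n_iter)
    for t in range(n_iter):
        x = rho * y + np.sqrt(1 - rho**2) * rng.standard_normal()
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal()
        xs[t] = x
    return xs

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of the chain; high values mean slow mixing."""
    xs = xs - xs.mean()
    return float(np.dot(xs[:-1], xs[1:]) / np.dot(xs, xs))

# Strong coordinate correlation -> x_t barely moves per sweep (slow mixing);
# weak correlation -> the chain decorrelates almost immediately.
slow = lag1_autocorr(gibbs_chain(rho=0.99, n_iter=5000))
fast = lag1_autocorr(gibbs_chain(rho=0.20, n_iter=5000))
```

The marginal autocorrelation of `x` is roughly `rho**2` per sweep, which is why conditioning on a nearly deterministic copy of the variable (as with $s \approx 0$) stalls the sampler.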
>**2. Is there diversity in generated samples?**
Our empirical results confirm that MGDM generates diverse samples. This is qualitatively evident in the visual samples provided in our supplementary materials; see, for example, Figure 2 and Figures 7–12.
>**3. How to address the scaling issues of the algorithm?**
We interpret the reviewer’s question as referring to memory scaling. Our algorithm’s memory footprint is comparable to DPS and PGDM, but higher than methods like DiffPIR and DDNM. This increased memory usage enables superior reconstruction quality, particularly for inpainting and outpainting (Figures 5, 6, and 8 of the original manuscript). Achieving similar quality with lower memory requirements remains an open research question. Importantly, on latent diffusion, our memory footprint is the same as all the other methods.
Thank you very much for your feedback. We kindly refer you to our responses to Reviewers awaZ and SCZR, where we provide additional experiments and metrics that further support our claims. | null | null | null | null | null | null |
Scaling Laws for Pre-training Agents and World Models
Accept (poster)

Summary: The work presents a scaling law study examining the behavior of action and observation prediction models (behavior cloning and world models, respectively). The main results characterize trade-offs between model and dataset scaling given a fixed compute (FLOPS) budget. One result evaluates world model prediction performance based on the size of the discrete token vocabulary for observations. A second result evaluates behavior cloning performance based on using discrete or continuous observation embeddings. Additional analyses justify the choice of next-token loss for downstream task performance and use other domains (text and robotics) to examine potential causes for the different scaling behaviors.
The main findings are that:
- Increasing the discrete token vocabulary of a world model leads to better scaling in model size compared to data.
- Discrete observations in behavior cloning favor smaller models with more data compared to continuous observations. Scaling laws like in LLMs are less clear (under the experimental FLOP budgets) for discrete observations.
## update after rebuttal
The replies addressed my questions, but my score was already quite positive. No changes.
Claims And Evidence: The claims and their evidence:
- claim: World models show power law scaling.
- evidence: Figures 1, 5, 6 and the associated experiments training a world model on Bleeding Edge data.
- claim: Imitation learning models show power law scaling.
- evidence: Figures 1, 7, 8 and the associated experiments training behavior cloning on Bleeding Edge data.
- claim: Tokenized imitation learning models favor smaller models with more data.
- evidence: Table 1 on the frontier fit analyses (and parametric fit for BC-Token-540). Also Figure 7.
- claim: Continuous observations for imitation learning models favor larger models.
- evidence: Table 1 on the frontier fit analyses (and parametric fit for BC-Token-540). Also Figure 8.
- claim: The trade-off between model and dataset size for world models is correlated with the number of tokens per observation.
- evidence: For the base experiments Table 1 and Figures 5 and 6.
- evidence: A test on RT-1 scaling training varied tokenization sizes and world models (Figure 11).
- claim: Next token prediction loss is a good proxy for reward achieved by behavior cloning.
- evidence: Figure 2, which re-analyzes a previous study on model scaling for behavior cloning. This shows correlation coefficients in the range of -0.94 to -0.98.
- claim: Next token prediction loss is a good proxy for image reconstruction quality achieved by a world model.
- evidence: Figures 3 and 7, which analyze image reconstruction quality (FVD and LPIPS) compared to world model size. These show correlation coefficients of 0.83 and 0.77.
- claim: The amount of token-level supervision and label super-classing explain the shallower scaling behavior of behavior cloning models compared to world models.
- evidence: Shakespeare text prediction scaling analyses that manipulate prediction tasks and class aggregation.
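The correlation-based evidence cited above (coefficients of -0.94 to -0.98 for loss vs. reward) is ordinary Pearson correlation. A minimal sketch, using hypothetical (loss, reward) pairs that are illustrative values only and not the paper's actual measurements:

```python
import numpy as np

# Hypothetical validation losses and downstream rewards for a family of
# behavior-cloning models of increasing size (invented numbers).
val_loss = np.array([2.10, 1.85, 1.62, 1.48, 1.40, 1.36])
reward   = np.array([0.21, 0.34, 0.52, 0.61, 0.67, 0.70])

# Pearson correlation between pre-training loss and downstream reward;
# a strongly negative value supports using loss as a proxy for performance.
r = np.corrcoef(val_loss, reward)[0, 1]
```

With monotone trends like these, `r` comes out strongly negative, matching the kind of relationship the reviewed figures report.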
Methods And Evaluation Criteria: Yes. Scaling on game data in an "infinite data regime" is appropriate to measure scaling laws. The main body and supplement provide evidence that the loss in this regime is strongly correlated with reward for behavior cloning (correlation coefficient larger magnitude than 0.9) and for observation prediction (aka world modeling, correlation coefficient 0.83 for FVD and 0.77 for LPIPS). While the evidence is from different game environments, it provides reasonable grounds to extrapolate.
Ideally the games used would be more diverse (than maps from a single environment), but that is not strictly needed for this type of analysis. It may alter the scaling coefficients observed due to the heterogeneity in observations and actions, but it would be surprising to learn that other games violated these patterns in the main (and worth separate study).
The supplemental analyses of Shakespeare text and RT-1 observations add support to the core scaling claims, though I have some modest concerns about the methods and how well they would translate across model tasks (see below).
Theoretical Claims: No proofs are in the paper.
Experimental Designs Or Analyses: Re-analysis of behavior cloning scaling in prior games. No obvious issues, though I am not familiar with the original paper and not sure if the assumption of an infinite data regime matches.
The scaling analyses for behavior cloning and world modeling in Bleeding Edge. The questions below address specific points. Most pressing is how to interpret the BC comparison given the methodological fitting differences between the discrete and continuous cases. It is not clear what conclusions we can draw from differences in the methods and given the lack of loss saturation given the FLOPS budget for the discrete case.
World model fitting extrapolation. No obvious issues, but it returns to the concern about drawing conclusions in the case where compute requirements prevented full testing.
Character prediction analyses of proposed mechanisms for different scaling behavior. These were a useful supplement to add evidence in favor of the hypotheses around lack of supervision and token granularity to explain the WM vs BC differences. My only (minor) concern is methodology: the fits are all estimated using parametric fit, despite preferring frontier fit for most of the other results in the paper. This is particularly true since the BC-CNN scaling results seem sensitive to this choice.
RT-1 observation encoding results to explain world model scaling behavior in model parameters compared to tokenization vocabulary size. These are helpful to have for more context, but raise questions about how strong the relationship is.
Supplementary Material: Yes. All of it.
Specifically useful parts:
- WM pretraining loss compared to outcome metrics (FVD and LPIPS).
- Dataset details to understand the diversity and nature of the data source.
- I skimmed information on training hyper-parameters, the character-level and RT-1 analyses.
Relation To Broader Scientific Literature: The prior literature on scaling laws and methodological improvements are connected to the choice of methods in this paper. The results in the paper are contrasted to prior work on auto-regressive modeling in video and images, where the newer methodology contradicts past findings. The paper also discusses prior scaling law research in the embodied context, including the work that is re-analyzed for behavior cloning.
Essential References Not Discussed: No references were missing that I consider essential.
Other Strengths And Weaknesses: # strengths
- clarity: Explains the methodological choices made and their application that helps introduce readers less familiar with the technical aspects of the neural scaling law literature.
- rigor: The main claims are supported by convergent lines of evidence alongside the main results. This includes the correlation studies for BC and WM with downstream task performance and follow-up experiments around the scaling variation hypotheses in the Shakespeare and RT-1 domains.
- significance: Establishing trade-offs in scaling architectures for embodied AI (at least in games) is important as these techniques see wider adoption. The methodological rigor and references here will help that as well.
# weaknesses
- clarity: The conclusions about WM scaling are hard to understand. The meaning is obstructed by jargon. There are many ideas being packed in, but it's hard to understand the overall narrative.
- It would help to make some statement along the lines of "When data is limited, favor scaling X." "When model size is constrained, favor scaling Y." for each case of: the WM tokenizer vocabulary size, BC tokenizer method.
- Similarly, the BC scaling conclusions are difficult to understand as stated.
Other Comments Or Suggestions: I've included any non-critical questions in this section.
- Figure 1 can be misleading around BC scaling. Multiple times I've misread the figure due to the magnitude differences of losses for the tokenized models (roughly 0.4 to 0.5) and continuous models (3.5 to 4.0). It may help to more explicitly call this out or somehow make it more apparent that the CNN losses are roughly an order of magnitude higher. This difference is important when considering the practical implications of the work for downstream tasks.
- More generally, it may help to provide some remarks on what this implies for embodied AI training.
- What are the implied best practices from these results?
- It seems the conclusion is that training a discrete model with as much data as possible is best (modulo saturation concerns for a given compute budget).
- Under what conditions of data limitation (for a given compute budget) does it become beneficial to use a continuous model?
- Figures 2 through 10 are too tiny to read. It would help to make the key figures larger and move less crucial points to the appendix.
- Perhaps the figures justifying the correlation of infinite data loss to outcomes.
- "This matches the magnitude of decrease seen in Table 1 from 0.66 to 0.32, indicating that the proposed mechanisms explain our findings." (line 370)
- This was confusing as stated. After some looking I believe it's intending to reference C^a for BC-CNN under Frontier Fit and BC-Token-540 under Parametric Fit.
Questions For Authors: - [Q1] What is the conclusion to draw for image encoders being discrete vs continuous?
- The CNN (continuous) model has higher loss and asymptotes being more shallow. Is this connected to the size of the latent representation?
- The tokenized (discrete) model has not started plateauing in the same way.
- [Q2] Is the use of parametric fit distorting these results at the highest FLOPS due to models not saturating? Did the discretization have too many tokens?
- [Q3] How does inference speed scale with model size?
- This is not crucial, but if the data was already on hand it would strengthen the results.
- Specifically this seems important in the behavior cloning case, where the learned model would need to run inference at roughly 10 Hz (to keep up with a game).
- [Q4] What is the correlation coefficient for Figure 11?
- The association looks weak to the eye. And there is also a potentially small effect size given the very compressed scale.
- [Q5] Is there any analysis on the discretized vocabulary size of BC that would be comparable to the world model tokenization size experiments?
- Given the lack of saturation and the compression ratio results, it would be helpful to have a similar parallel analysis for the BC case. This is a "nice to have" that would strengthen the overall narrative.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Many thanks for taking the time to review our paper in such detail, and raising several points we had not considered. We are pleased to see the value of our contributions has been recognized. Below, we respond to your main questions, followed by your ‘other comments’ and finally several points raised elsewhere in the review.
- __Q1.__ Discrete vs continuous encoders for BC.
Thanks for this comment – we appreciate the reviewer’s insight into trying to reconcile the two plots, which is something we agree is worth understanding in more depth. First, as discussed in __Other comment 1.__ caution should be applied as the two BC losses are in fact subtly different, e.g. one should not say the BC-CNN loss is ~10x higher than BC-Token’s.
However, the fact BC-CNN asymptotes quicker, means those models are approaching the natural entropy in the data ($E$ in Eq. 7) at modest compute budgets, while BC-Token does not show signs of this. One hypothesis is that BC-CNN models are therefore more compute efficient, though an alternative hypothesis is that differences in loss modelling cause this.
We propose to investigate this further when training the new smaller BC-Token models requested by Reviewer TxDt, by computing the equivalent BC-CNN loss for the new models, to better understand how these two losses align. Again, thanks for the suggestion.
- __Q2.__ Parametric fit distorting results
We assume the reviewer is referring to the BC-Token experiment? We expect that following Reviewer TxDt’s recommendation to train a set of smaller models to encourage saturation, we will be able to provide coefficients for the frontier fit method resolving this issue.
- __Q3.__ Inference speed and model size
We can confirm all BC-Token and BC-CNN models considered in our experiments can be run at >10Hz on a V100 GPU. We do not currently have numbers to hand on how inference speed changes with model sizes.
- __Q4.__ Figure 11 correlation
The computed correlation comes out as 0.61, which is considered a moderate-strength relationship. We will add this to the figure caption. The high variance arises because a set of models is trained for each point – one model in the family failing to converge properly can shift the coefficient substantially. This occurred more often for models trained on larger tokens-per-image. We believe that with further optimization of the training schedule these models could converge more consistently and the pattern would emerge even more strongly.
- __Q5.__ Investigating effect of tokenization of actions on coefficients
This is a nice suggestion we had not thought of! Does the action tokenizer affect coefficients in similar ways to image tokenizers? Whilst most action dims are natively discretized (buttons), the natural lever to play with here would be the number of bins for the continuous joystick dims. But we could also consider grouping together various button combinations to require fewer tokens per action prediction. We feel this goes a little beyond the current scope of the paper but would be excited to see investigation of this in follow up work.
- __Other comment 1.__ Figure 1 with differing loss scales
We apologize for not making this clearer. In fact, the loss magnitudes are not directly comparable between BC-Token-540 and BC-CNN. Each model optimizes a subtly different loss; BC-Token-540 predicts action dimension-by-dimension, while BC-CNN produces predictions for each action dimension independently (Section 3.2). Our implementations also aggregate the losses in different ways. We will edit the figure caption to make this clearer.
- __Other comment 2.__ Small figures
Apologies, we will expand figure sizes in the next version.
- __Other comment 3.__ Line 370 unclear
Apologies for this, you were correct in your guess. We will clarify.
- __Comment 1.__ Re-analysis of BC ... unsure about infinite data assumption
To confirm, Tuyls et al. used pre-trained policies so could generate unlimited data and remain in the infinite data regime.
- __Comment 2.__ Character prediction experiments used parametric fit
We agree the frontier fit would be preferable. While we could have used the frontier fit for the Dense loss, this did not appear possible for the sparse super-classed loss (Figure 10). We favored consistency in our fitting method across these experiments.
- __Comment 3.__ Conclusions about scaling hard to understand
Thanks for this comment. We have focused on presenting nuanced details from the scaling analysis and have so far not offered clear conclusions for practitioners. Would the reviewer agree with the below concrete recommendations?
1) When training BC-Token models, model sizes should be substantially smaller than for BC-CNN.
2) For WM-Token models, increasing tokens-per-image of the tokenizer should be done in parallel with increasing model sizes.
3) As per response to __Q1.__ we may be able to advise a preference for BC-CNN models, but await our analysis contrasting BC-CNN and BC-Token losses.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed replies! I'll only remark on open topics in my mind.
In general I do not share as large a concern about establishing the level of correlation to more complex environments. This will likely be lower than toy environments (for smaller foundation models), but unlikely to be so insubstantial as to render the models useless. While that _may_ be true, it would merit a separate publication in its own right (and perhaps is a follow-on to consider with Bleeding Edge).
# Comment 3
I'm most interested in the results from the third suggestion, as that seems the most interesting outcome of the experiments that have been done so far.
The other two claims are sensible and would be welcome to highlight.
# Q1
I had not realized this difference. I would like to see the new experiment results and loss scaling to better understand the implications. As mentioned above, I found it difficult to translate the results into whether tokenization or CNN was better at these model scales. Understanding this kind of scaling behavior is very important with the growth of multi-modal foundation models in general (which often incorporate vision as an input modality). At least the matched scaling seems important to verify before acceptance.
# Q5
The other idea that came to mind was to apply a different discretization scheme. The FAST tokenizer may be one option to consider for the discrete input spaces (https://arxiv.org/abs/2501.09747). Note that really it's a recipe for many such frequency domain tokenization approaches, leaving the choice of compression technique as a free parameter.

Summary: This paper investigates the scaling laws in embodied AI. Specifically, this paper focuses on the infinite data regime and generative pre-training objectives, which include behavior cloning and world modeling. The scaling laws are observed in the following two cases through the experiments. One is world modeling with tokenized observations and actions while the other is behavior cloning with one continuous encoding per observation. Experiments and analysis are conducted in the domains of video games and robotics.
Claims And Evidence: My primary concern is that this paper is built upon the assumption of an infinite data regime, where samples are not trained more than once. While this assumption holds for LLM on NLP pre-training, is it applicable in the cases used in this paper, such as specific video games or robots in a fixed environment?
Methods And Evaluation Criteria: This paper claims the pre-training loss could be used to represent the downstream performance and thus serve as the indicator for scaling laws. However, this connection is simply obtained from small-scale environments like Atari and NetHack. Extending this observation to more complex scenarios (e.g., video games and robotics) has no clear evidence or guarantee. Hence, whether the observed scaling laws can represent the real robot performance remains unclear.
Theoretical Claims: No theoretical claims are provided in the main paper.
Experimental Designs Or Analyses: This paper only uses generative pre-training loss for evaluation. Evidence for how this proxy reflects real performance is lacking.
Supplementary Material: I have read the appendix.
Relation To Broader Scientific Literature: Compared with previous works, this paper chooses to analyze the scaling laws of embodied AI by generative pre-training loss in the infinite data regime.
Essential References Not Discussed: To the best of my knowledge, the references are sufficiently covered.
Other Strengths And Weaknesses: 1. The overall paper is not well organized. For example, all figures in this paper are too small.
2. The subsections in this paper are also not well structured. For example, it would be better to put Sec. 2.2 into Section 3, since Sec. 2.2 can be viewed as the foundation of the method. Also, moving Sec. 3.3 to the experiment section would make the paper easier to follow. Overall, the paper flow and the figure layout should be improved.
Other Comments Or Suggestions: Please refer to the issues raised above.
Questions For Authors: Please refer to the issues raised above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper. Please see below our responses.
- __Comment 1.__ Infinite data regime assumption
Thank you for drawing attention to this important detail. To clarify, all experiments in our paper for both domains (video games and robotics) are conducted in the infinite data regime (Section A.3.1 computes the amount of FLOPs that would violate this assumption in our set up for the main experiments).
Investigation into scaling laws outside the infinite regime is an active research area in LLM research (Scaling Data-Constrained Language Models). As we understand, the current thinking is that scaling laws do still exist in this setting, but are influenced in adverse ways by repeated epochs.
We believe the reviewer is correct to call out this as a line of research very relevant to embodied AI, which typically falls within the data-constrained regime. We believe it falls outside the scope of a first investigation into scaling laws in embodied AI, for which we have chosen to focus our investigation on the effect of tokenizer, task and architecture.
- __Comment 2.__ Pre-training loss and downstream performance
We have done our best to frame all claims in the paper as optimizing for loss in the pre-training phase of agents, rather than for downstream performance, as these are the concrete results we have. We have further provided conceptual arguments and experimental evidence on the link between pre-training loss and performance. Note the Nethack and Atari experiments, while being of limited complexity, are not necessarily small-scale (up to 10^18 FLOPs).
Providing further evidence effectively for our dataset, environment and setup, we feel would require resources beyond the scope of this paper. We would require implementing a form of post-training phase to maximize some reward signal, and engineer an automated, distributed solution to enable rollouts in this game at large scale (something Bleeding Edge does not natively do).
In general, we agree with the reviewer that the link between pre-training loss and online performance is important enough to deserve further study, but leave this to a separate investigation.
- __Other comments__
Thank you for these suggestions. Should we be fortunate enough to have the paper accepted, we will take your suggestion and use the allowed extra page to enlarge figures for clarity. Regarding the paper organization, please confirm whether other reviewers support your proposed changes. We would then be happy to reorder the paper as you suggest in the next version.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing the rebuttal. I've read the author's response and comments from other reviewers. I have no further questions at this time. I will increase my original rating to 3.

Summary: This paper investigates the existence and characteristics of scaling laws in embodied AI tasks, specifically world modeling (WM) and behavior cloning (BC), drawing parallels to scaling laws observed in large language models (LLMs). Through extensive experiments on large-scale video game datasets (e.g., Bleeding Edge) and robotics data (RT-1), the authors demonstrate that power-law relationships between model size, dataset size, compute, and pre-training loss also apply to WM and BC. Key findings include:
- Scaling laws for WM are influenced by tokenizer compression rates. Higher compression rates lead to more reliance on dataset size.
- BC with tokenized observations requires prioritizing dataset scaling under modest compute budgets, while BC with CNN-based architectures favors model scaling.
- Small-scale language modeling experiments validate mechanisms behind observed scaling phenomena (e.g., sparse supervision and target granularity).
Claims And Evidence: All claims are supported by thorough experiments that are carefully designed.
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: All experimental are carefully designed and conducted.
Supplementary Material: Yes, Figure 16.
Relation To Broader Scientific Literature: Scaling laws play a crucial role in the development of large AI models, such as LLMs. This paper makes significant contributions to the understanding of scaling laws in agent learning by utilizing expressive architectures. These insights will be instrumental in unifying scaling laws across various domains
Essential References Not Discussed: To the best of my knowledge, all related works are cited/discussed.
Other Strengths And Weaknesses: Strengths:
- The work bridges a critical gap in understanding scaling laws for embodied AI, extending principles from LLMs to WM and BC. This is timely, given the growing interest in scaling generative models for robotics and interactive agents.
- The paper validates findings across diverse datasets (video games, robotics) and architectures (tokenized vs. CNN-based). The inclusion of small-scale language experiments to explain mechanisms adds methodological depth.
- The analysis of tokenizer compression rates and architecture choices provides actionable guidelines for optimizing compute allocation in real-world applications.
- I really like the meta-analysis linking pre-training loss to downstream metrics (e.g., FVD/LPIPS for WM), which strengthens the case for using pre-training loss as a proxy in scaling studies.
Weaknesses:
- The existing datasets for robotics lack sufficient diversity, which may limit the value of the conclusions presented in the paper, particularly when downstream tasks emphasize generalization capabilities [1]. The RT-1 robotics dataset’s small scale also raises questions about extrapolation to large-scale datasets in robotics like OXE dataset.
- The observed effects of tokenizer compression rates and target granularity lack theoretical grounding. While empirical results are compelling, deeper mechanistic explanations would strengthen the work.
[1] A Taxonomy for Evaluating Generalist Robot Policies
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for your review. We are delighted to have successfully communicated the value of our study. We agree that extending principles from LLMs to embodied AI tasks is a timely and important avenue of research. Allow us to respond to your comments below.
- __Comment 1.__ RT-1 dataset has limited diversity and is small scale. How do conclusions apply to tasks where generalization is important, and larger scale datasets.
We recognize that the RT-1 tasks, policies and visuals contain only limited diversity. On the other hand, the Bleeding Edge dataset contains a huge amount of diversity, with a pool of seven maps and tens of thousands of demonstrators of differing skill levels.
While these two datasets are from differing domains, we did not observe contradictions between the two sets of experiments. We hope that analyzing the results in combination will give researchers enough evidence that similar patterns would also emerge in a large diverse robotics dataset.
- __Comment 2.__ Effects of tokenizer compression rates and target granularity lack theoretical grounding
Thank you for this comment. This is one of the directions for future work we are also most excited about, and have started very initial experiments in this direction. We sketch our current thinking on this issue in case of interest.
We have begun considering the case of lossless compression, where more tokens per observation (lower compression) should make next-token prediction easier, both in the sense that the pattern is simpler (fewer parameters required) and that there is less stochastic noise (less data required). We suspect this leads to lower values of both alpha and beta. But the key question is which shrinks faster (since their ratio is what matters), which may be hard to reason about without further assumptions.
One thing that makes analysis tricky is that losses may not be consistent across tokenizers even in the lossless case, and in reality, tokenizers become more lossy at higher compression rates.
Regarding this paper submission, we believe that a deeper analysis of compression rates is best left to a separate paper, and perhaps domains more straightforward than embodied AI. Should you be interested in exploring this direction further, please reach out following the review period.

Summary: The paper explores scaling laws in embodied AI, especially for the pre-training stage of world models and agent behavior. The authors show that power laws similar to those in LLMs are observed in world modeling and behavior cloning, but with coefficients influenced by the tokenizer, task, and architecture. The study provides insights into optimal model and dataset sizing for these tasks.
Claims And Evidence: * Scaling laws similar to those in LLMs can be observed in world modeling with tokenized observations and actions → Partially insufficient
- The authors show extensive experiments to prove the scaling laws for training loss, but only minimal examples of how this training loss translates to real world performance on embodied AI tasks. Additional tasks to show the training loss as a proxy would make this claim stronger.
* The optimal trade-off between model and dataset size in world modeling is influenced by the tokenizer’s compression rate → Sufficient
- Two examples of tokenizers are used to support this claim. Additional examples using the small RT-1 dataset also support this claim, but the sizes used are quite small due to the limited size of the dataset.
* Scaling laws for BC with tokenized observations are harder to observe under modest compute budgets. The optimal trade-off favors smaller models and more data → Partially insufficient
- The authors claim that models with size > 2M params don’t saturate over the flops range considered and use the parametric fit instead of the frontier fit method. Can additional models with smaller sizes <2M be used to show that the parametric fit and frontier fit curves match for smaller sizes?
* Scaling laws similar to those in LLMs can once again be observed in BC with one continuous encoding per observation → Sufficient
- The power law curves for training loss with BC-CNN show a strong trend as claimed.
Methods And Evaluation Criteria: The benchmark dataset used from the game Bleeding Edge is a reasonable choice for the proposed claims, but might not be sufficient to show that the claims also hold for other examples of embodied AI - especially real world tasks with a long tail of difficult scenarios. The authors show that the training loss has a strong correlation with the final task performance for the few tasks studied, but it’s not clear if these will also hold for other general cases of BC and world modeling (like discussed in [1]), as is even discussed for LLMs in [2].
Previous studies like [3,4] have shown a significant impact of dataset quality on downstream performance for agents. It would also be very useful if the 7 Maps dataset were made available publicly to allow reproducibility of this research.
[1] Codevilla, Felipe, Eder Santana, Antonio M. López, and Adrien Gaidon. "Exploring the limitations of behavior cloning for autonomous driving."
[2] Liu, Hong, Sang Michael Xie, Zhiyuan Li, and Tengyu Ma. "Same pre-training loss, better downstream: Implicit bias matters for language models."
[3] Bronstein, Eli, Sirish Srinivasan, Supratik Paul, Aman Sinha, Matthew O’Kelly, Payam Nikdel, and Shimon Whiteson. "Embedding synthetic off-policy experience for autonomous driving via zero-shot curricula."
[4] Belkhale, Suneel, Yuchen Cui, and Dorsa Sadigh. "Data quality in imitation learning."
Theoretical Claims: N/A all the claims in the paper are empirical and supported by experimental results.
Experimental Designs Or Analyses: The experiments are well designed and detailed to support the claims for the proposed domain of agents and world modelling for the Bleeding Edge game. Additional evidence of the final game performance of the agent and world model under the proposed pre-training setup, demonstrating the strength of the correlation between training loss and game performance, would help solidify the claims further.
The extension of these claims to the entire embodied AI domain for agents and world models, however, isn't really conclusive, especially because the optimal dataset and model sizes vary quite a lot between the different tasks and tokenizers.
Supplementary Material: Yes, the appendix at the end of the submission is a useful addition - especially the training and other implementation details help with the understanding of the paper and the experiments conducted to obtain the results.
Relation To Broader Scientific Literature: The contributions of the paper, especially showing that the scaling laws from LLMs extend to embodied AI are significant, and will accelerate the development of models for the community by providing suggestions for optimal model and dataset sizes. The claims however feel stronger than the actual results, especially since the scaling laws aren’t really general and influenced by the specific task, tokenizer and architecture.
Essential References Not Discussed: None that are obvious or very well known.
Other Strengths And Weaknesses: Strengths
- The authors successfully demonstrate that power laws similar to those in language models also apply to world modeling and behavior cloning. This extension is crucial as it guides researchers in optimizing resource allocation for embodied AI tasks.
- The paper employs a rigorous methodology, using clear definitions and complementary fitting methods to establish scaling laws.
- Key findings include the influence of tokenizer compression rate on optimal model size in world modeling and the preference for smaller models with more data in behavior cloning with tokenized observations.
Limitations:
- While the paper justifies using pre-training loss as a proxy for performance, this approach has limitations. Studies have shown mixed results regarding the correlation between pre-training loss and downstream performance.
- The paper does not fully address domain-specific challenges such as physical interaction complexity and the lack of long tail scenarios.
- The curves generated using the frontier fit definitely support a stronger claim than those with the parametric fit.
- The paper could benefit from more detailed analysis of dataset diversity and potential biases.
- The dataset used for the experiments is proprietary, and the computational requirements for reproducing these results are substantial, limiting accessibility for researchers with fewer resources.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Additional evidence to support the claim that the pre-training loss correlates strongly with the final performance metrics will definitely be useful. Especially because the domain of embodied AI with agent modelling suffers from error accumulation over steps, resulting in out-of-distribution states and sub-optimal performance in the long tail of scenarios.
2. The authors claim that the scaling laws for BC with tokenized observations are hard to observe under the current compute budgets. Could additional evidence using smaller models or a different dataset be added to show that the scaling laws actually satisfy the authors’ claims?
3. Figure 9 shows the extrapolation of the scaling laws to a much larger model - additional examples of large scale models for the other claims will also help strengthen that they are satisfied over a wide range of model scales. (I do understand that this could require significant training time, so will only be a positive if it can be added)
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review, we are pleased your judgement of the paper comes out on the side of acceptance and agree that your suggestions would further improve the paper. Due to logistical constraints, we are not able to complete all of these requests, but commit to completing the smaller scales BC-Token experiments (which we agree is an excellent idea), before any camera-ready deadline. Below we respond to your three primary questions, and several further key comments we noted.
- __Q1.__ Additional evidence … to show the correlation between the training loss and game performance
We have done our best to frame all claims in the paper as optimizing for loss in the pre-training phase of agents, rather than for downstream performance, as these are the concrete results we have. We have further provided conceptual arguments and experimental evidence on the link between pre-training loss and performance but agree this should continue to be explored through future work.
Doing this effectively for our dataset, environment and setup would, we feel, require resources beyond the scope of this paper. We would need to implement a form of post-training phase to maximize some reward signal, and engineer an automated, distributed solution to enable rollouts in this game at large scale (something Bleeding Edge does not natively support).
As such, similar to the LLM reference noted by the reviewer (their [2]), we believe this is best left to a separate paper.
- __Q2.__ Smaller models in the BC-Token experiments
Thank you for this very sensible suggestion! Our original protocol had aimed for consistent model sizes across tasks and tokenizers, but on reflection, we agree that smaller models could be used for the BC-Token experiments. Logistically this will be difficult to complete within the rebuttal period (very small models do not fully utilize GPU FLOPs and hence are very time inefficient), but we are happy to commit to completing this before any camera-ready deadline (should we be fortunate enough to have the paper accepted).
- __Q3.__ Additional extrapolation experiments
We agree with the spirit of this request – the gold standard test for scaling laws is how well they predict orders of magnitude out. As the reviewer mentions, this would require more compute than used in our current paper, which is hard for us to commit to. We hope our current results, which span around three orders of magnitude, are sufficiently interesting for much of the embodied AI community, which has not yet advanced to the model sizes seen in LLMs.
- __Comment 1.__ Claims feel stronger than results … since the scaling laws aren’t really general and are influenced by the specific task, tokenizer and architecture
Please let us know if there are specific wording changes you’d recommend. We aimed to write the paper in a way that would engage researchers across embodied AI, whilst avoiding overclaim or hype. Power laws did consistently emerge across the two domains tested, and one of our main contributions is that task, tokenizer and architecture _do_ impact coefficients. This is an important departure from LLM research, where datasets, tokenizers and architectures usually only see minor variations.
- __Comment 2.__ Dataset is proprietary, reproducing these results are substantial, limiting accessibility for researchers with fewer resources
The Bleeding Edge dataset was accessed under a data sharing agreement and unfortunately, we do not have the authority to share it ourselves. On the other hand, the RT-1 experiments are fully reproducible. This follows recent high-impact embodied AI work (Genie: Generative Interactive Environments), which described a lightweight environment for other researchers to experiment in. Note our LLM experiments are also designed to help researchers capture the essence of the challenges we identified in a more accessible modality.
---
Rebuttal Comment 1.1:
Comment: Thanks for the replies to the comments and questions. Overall, I don't think that I will change my rating and it will still be a weak accept, as I feel that the paper will be a useful addition to the community, but the generalisation and transferability of the research aren't directly clear.
Q1 - I agree that this will be a useful, but not necessary addition to the paper to strengthen the claims, but can be left for a follow up like the authors say.
Q2 - Thanks, curious to learn more about the results.
Q3 - I understand the difficulty of running further scaling experiments, and since this is only an additional point which would help, we can skip this for now.
Comment1 - I appreciate the transparency in the results, and acknowledge that the authors do discuss the impact of task, tokenizer and architecture, and do not contest this. I just wanted to bring up my concern that the scaling laws don't transfer very well when one or more of these factors are changed, and so can't directly be reused by different users, unlike LLM results.
Comment2 - Thanks for the clarification, but unfortunately complete reproducibility of these research findings isn't possible, which would have definitely made the claims of the paper feel stronger. | null | null | null | null | null | null |
Leveraging Model Guidance to Extract Training Data from Personalized Diffusion Models | Accept (poster) | Summary: The paper proposes a model-guidance method to extract fine-tuning data, leveraging the base pre-trained model as guidance. The proposed "model guidance" can sample from the learned distribution of the fine-tuned models via simple guidance techniques. They further propose a new clustering algorithm for sampling within high-probability regions by constructing an image graph. Experiments on various datasets confirm its effectiveness, supported by ablation studies and real-world applications, such as using checkpoints from the community.
Claims And Evidence: Yes, the paper’s claims are supported by experiments. But some parts are missing; see Weaknesses for details.
Methods And Evaluation Criteria: Yes, the paper’s methods are supported by experiments.
Theoretical Claims: Yes, the paper’s equations are reasonable.
Experimental Designs Or Analyses: Yes, the paper’s methods are supported by experiments. But some parts are missing; see Weaknesses for details.
Supplementary Material: Yes, I read all supplementary material.
Relation To Broader Scientific Literature: This work is the first try to leverage a base pre-trained model to extract its fine-tuning dataset. Most existing works did not distinguish this from pre-training.
Essential References Not Discussed: No, to the best of my knowledge, there are no related works that should be further compared or discussed.
Other Strengths And Weaknesses: **Strengths**
- Model guidance is straightforward and aligns well with the current trend of using the same base model in real-world applications.
- The paper is easy to read and follow.
- Experiments are provided on various models and concepts, and the experiments using checkpoints from communities like Hugging Face were particularly good.
**Weaknesses**
- The paper assumes that the training caption is given. Although I found the training caption extraction part in the Appendix, I couldn't find the training data extraction result using the extracted prompt. In my opinion, this part is crucial for highlighting the paper's practical motivation, so the results should be reported.
- The model guidance method sounds reasonable, but the clustering approach is not very intuitive and novel. An ablation study on the clustering method is essential, and further explanations, such as qualitative examples, would be helpful.
Other Comments Or Suggestions: Please refer to Questions or Weaknesses.
Questions For Authors: How does the method perform on recent models, such as FLUX? In the case of DreamBooth, better models tend to show higher generalizability rather than memorization. I would like to see results related to this.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback!
1. For Weakness1:
> I couldn't find the training data extraction result using the extracted prompt. In my opinion, this part is crucial for highlighting the paper's practical motivation, so the results should be reported.
Our default setting assumes captions are accessible, aligning with previous works [1-3]. We observe that captions are accessible in many cases, particularly for real checkpoints on Civitai. This is especially evident for DreamBooth, where several special tokens within the training captions (such as "a sks dog") are always available to ensure the correct application of the models.
Nonetheless, there are scenarios where captions are not accessible, and we admit that data extraction result using the extracted prompt is helpful. Therefore, we update the results using the extracted captions (3 words in length) shown in Tab 5.
| | AS | A-ESR($\tau$=0.6) |
|:-------------------------------:|:------:|:-----------------:|
| FineXtract (Full Caption) | 0.501 | 0.35 |
| FineXtract (Extracted Caption) | 0.314 | 0.15 |
| FineXtract (Empty Caption) | 0.192 | 0.00 |
| CFG (Full Caption) | 0.434 | 0.23 |
| CFG (Extracted Caption) | 0.308 | 0.08 |
| Direct Text2img (Empty Caption) | 0.146 | 0.00 |
We observe that though the extraction success rate decreases compared with extraction with full captions, it is significantly stronger than performing extraction without any caption information. It confirms the effectiveness of the proposed caption extraction approach. Additionally, FineXtract achieves notably higher extraction performance under extracted captions compared to the baseline.
[1]Carlini N, Hayes J, Nasr M, et al. Extracting training data from diffusion models[C]//32nd USENIX Security Symposium (USENIX Security 23). 2023: 5253-5270.
[2]Somepalli G, Singla V, Goldblum M, et al. Diffusion art or digital forgery? investigating data replication in diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023: 6048-6058.
[3]Somepalli G, Singla V, Goldblum M, et al. Understanding and mitigating copying in diffusion models[J]. Advances in Neural Information Processing Systems, 2023, 36: 47783-47803.
2. For Weakness2:
> An ablation study on the clustering method is essential, and further explanations, such as qualitative examples, would be helpful.
Without the clustering method, the attacker can only generate a number of data points but cannot determine which ones could lead to a successful extraction. This is reflected in our ablation study in Fig 4(b), where only 1 generated image per training image is used. The result is presented again in the following table to emphasize how the clustering component significantly improves the AS and the A-ESR.
| | AS | A-ESR($\tau$=0.7) | A-ESR($\tau$=0.6) |
|:-------------------------------:|:------:|:-----------------:|:-----------------:|
| CFG (without Clustering) | 0.280 | 0.00 | 0.10 |
| FineXtract (without Clustering) | 0.338 | 0.05 | 0.13 |
| CFG (with Clustering) | 0.434 | 0.08 | 0.23 |
| FineXtract (with Clustering) | 0.501 | 0.15 | 0.35 |
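The graph-based selection step can be sketched as follows (a simplified stand-in: the paper's actual clustering algorithm is not reproduced here, and `select_candidates`, the cosine-similarity features, and the degree heuristic are all illustrative assumptions):

```python
import numpy as np

def select_candidates(feats, thresh=0.7, k=1):
    """Simplified stand-in for graph-based candidate selection.

    Build a similarity graph over features of the generated images and
    keep the k images with the most above-threshold neighbours, on the
    idea that samples drawn from high-probability regions of the
    learned distribution land close to one another.
    """
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T          # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)     # ignore self-similarity
    degree = (sim > thresh).sum(axis=1)
    return np.argsort(-degree)[:k]
```

Without such a selection step, the attacker has many generated samples but no way to rank which are likely training-image matches, which is what the ablation above quantifies.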
3. For Questions:
> How does the method perform on recent models, such as FLUX?
We update the results using FLUX.1 [dev]. The DreamBooth fine-tuning on FLUX.1 [dev] requires more than 100GB per GPU, which we are currently unable to run. However, we have conducted experiments in the LoRA scenario using the official scripts provided by Diffusers. The training iterations are fixed at 150$N_0$, as suggested by the repository, and other hyperparameters remain the same as those in the repo. We compare our method with the baseline (CFG), and both methods show the best performance when the guidance strength $w’$ is set to 3.0, with the improvement being consistent.
| | AS | A-ESR($\tau$=0.7) | A-ESR($\tau$=0.6) |
|:--------------:|:-----:|:-----------------:|:----------------:|
| CFG+Clustering | 0.407 | 0.03 | 0.20 |
| FineXtract | 0.496 | 0.30 | 0.43 |
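For readers skimming the tables, the two metrics can be sketched as follows (a hedged reconstruction: the paper defines AS and A-ESR precisely, and here we only assume AS averages each training image's best-match similarity while A-ESR($\tau$) thresholds it):

```python
import numpy as np

def extraction_metrics(sim, tau=0.6):
    """Hedged reconstruction of the AS and A-ESR metrics.

    sim[i, j] is the similarity between training image i and extracted
    image j. Assumed definitions: AS is the mean of each training
    image's best match; A-ESR(tau) is the fraction of training images
    whose best match exceeds the threshold tau.
    """
    best = sim.max(axis=1)
    return best.mean(), (best > tau).mean()
```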
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for the detailed response and additional experiments. My concerns are mostly addressed, and I thus raise my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our work! We're encouraged that most concerns have been addressed and truly appreciate the time and effort spent on the review. | Summary: This paper introduces FineXtract, a framework for extracting data used in the fine-tuning of personalized diffusion models. The authors propose a parametric approach to approximate the fine-tuning data distribution by extrapolating the original output distributions of both the pre-trained and fine-tuned models. Subsequently, a clustering algorithm is applied to identify probable fine-tuning images from the generated samples. The proposed method is evaluated across various fine-tuning scenarios, including real-world cases, achieving an Extraction Success Rate of approximately 20%.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Nan
Experimental Designs Or Analyses: Yes, I checked section 5 and all sections in the Appendix.
Supplementary Material: Yes, I reviewed all sections in the Appendix.
Relation To Broader Scientific Literature: Previous studies on data extraction have primarily focused on pre-trained diffusion models, while research on extracting data from personalized diffusion models—particularly few-shot fine-tuned models—remains scarce. This work introduces a novel approach by constructing the fine-tuning data distribution through extrapolation between the pre-trained and fine-tuned models. To the best of my knowledge, this is a new perspective that is likely to provide valuable insights to the research community.
Essential References Not Discussed: Nan
Other Strengths And Weaknesses: Strengths: The paper is overall well constructed and presented. The proposed approach is principled. The empirical study is thorough.
Weaknesses: See questions.
Other Comments Or Suggestions: Nan
Questions For Authors: I have two questions regarding the empirical study:
1. Caption Accessibility Assumption: As the authors also noted, the assumption that training captions are fully accessible seems somewhat strong for practical scenarios. It is commendable that the paper discusses strategies for partially extracting and extending captions when they are unavailable. However, did the authors evaluate FineXtract using these extracted captions? If so, how does the performance compare to using ground-truth captions?
2. Hyperparameter $\lambda'$ Selection: It appears that the key hyperparameter $\lambda'$ is selected via grid search (among 1.0, 2.0, 3.0, 4.0, and 5.0), and its optimal value varies across different fine-tuning settings and data extraction approaches. Could the authors provide further discussion on the underlying principles or insights for identifying $\lambda'$ or setting the grid search range? For example, I suspect that the optimal $\lambda'$ depends on the number of fine-tuning iterations. Would it be possible to fix the fine-tuning dataset and the data extraction approach, and systematically analyze how the optimal $\lambda'$ changes with the number of fine-tuning iterations? Furthermore, when the fine-tuning dataset and iteration count are fixed, does a larger optimal $\lambda'$ suggest that the model or fine-tuning approach is more prone to overfitting or memorization of the training data?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback!
1. For Questions1:
> It is commendable that the paper discusses strategies for partially extracting and extending captions when they are unavailable. However, did the authors evaluate FineXtract using these extracted captions? If so, how does the performance compare to using ground-truth captions?
As the caption extraction algorithm is not the core contribution of this paper, we focus on the extraction with full captions in our main experiments, consistent with the settings of previous works [1-3].
Nonetheless, we admit that evaluating FineXtract using the extracted captions is highly valuable for fully assessing its effectiveness. Therefore, we update the results using the extracted captions (3 words in length) shown in Tab 5. We observe that, although the extraction success rate decreases compared to using full captions, it remains significantly higher than when no caption information is used. FineXtract, when applied to extracted captions, achieves a notably higher extraction success rate compared to the baseline. This confirms the effectiveness of both the proposed caption extraction approach and FineXtract.
| | AS | A-ESR($\tau$=0.6) |
|:-------------------------------:|:------:|:-----------------:|
| FineXtract (Full Caption) | 0.501 | 0.35 |
| FineXtract (Extracted Caption) | 0.314 | 0.15 |
| FineXtract (Empty Caption) | 0.192 | 0.00 |
| CFG (Full Caption) | 0.434 | 0.23 |
| CFG (Extracted Caption) | 0.308 | 0.08 |
| Direct Text2img (Empty Caption) | 0.146 | 0.00 |
[1]Carlini N, Hayes J, Nasr M, et al. Extracting training data from diffusion models[C]//32nd USENIX Security Symposium (USENIX Security 23). 2023: 5253-5270.
[2]Somepalli G, Singla V, Goldblum M, et al. Diffusion art or digital forgery? investigating data replication in diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023: 6048-6058.
[3]Somepalli G, Singla V, Goldblum M, et al. Understanding and mitigating copying in diffusion models[J]. Advances in Neural Information Processing Systems, 2023, 36: 47783-47803.
2. For Questions2:
> Could the authors provide further discussion on the underlying principles or insights for identifying or setting the grid search range?
> Furthermore, when the fine-tuning dataset and iteration count are fixed, does a larger optimal suggest that the model or fine-tuning approach is more prone to overfitting or memorization of the training data?
Intuitively, with longer training iterations, the distribution learned by the fine-tuned model should better align with the fine-tuning data distribution. In other words, $p_{\theta'}(x)$ should approach $q(x)$ and become more distant from $p_{\theta}(x)$ in Eq. (3). Therefore, the optimal $\lambda$ should increase (closer to 1), and the optimal $w = \frac{1}{\lambda}$ should decrease with longer training iterations. Similarly, for the conditional diffusion model in Eq. (7), the optimal $\lambda'$ should also increase, while the optimal $w' = \frac{1}{\lambda'}$ decreases, with longer training iterations.
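Spelling this out (a sketch, assuming that the interpolation of Eq. (3) carries over to the scores, as the guidance formulation implicitly does):

```latex
\nabla_x \log p_{\theta'}(x) \;\approx\; (1-\lambda)\,\nabla_x \log p_{\theta}(x) + \lambda\,\nabla_x \log q(x)
\quad\Longrightarrow\quad
\nabla_x \log q(x) \;\approx\; w\,\nabla_x \log p_{\theta'}(x) - (w-1)\,\nabla_x \log p_{\theta}(x),
\qquad w = \tfrac{1}{\lambda}.
```

As $\lambda \to 1$ with longer training, $w = \frac{1}{\lambda} \to 1$ and the extrapolation reduces to sampling from the fine-tuned model directly, consistent with the trend described above.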
We update the results using images from four classes of WikiArt under DreamBooth and perform a grid search to find the optimal hyperparameter $w'$ under different training iterations. We set the fixed term $k = 0$ for simplicity, focusing only on $w'$. The experiment results are available via an anonymous link: [Grid search for the best w' for different classes](https://drive.google.com/file/d/1HGMN1jv4x3jrMP21QgEa2Z9aYnB-4oSj/view?usp=sharing).
Our findings show that, for our method, the optimal $ w' $ tends to decrease (the optimal $\lambda'=\frac{1}{w'}$ tends to increase) as training iterations increase in most cases. However, due to the inherent randomness in the clustering process, this trend is not consistent across all scenarios.
We also observe that when the AS is higher, indicating that the model has memorized more, the optimal $w'$ tends to be smaller (the optimal $\lambda'=\frac{1}{w'}$ tends to be larger). The experiment results are available via an anonymous link: [Experiment result for AS at best $w'$](https://drive.google.com/file/d/1rMAXcjBODxSZ7U523_SOeYC7X0RJcIEN/view?usp=sharing).
In other words, a larger $\lambda'$ likely suggests that the model, or the fine-tuning approach, is more prone to overfitting or memorization of the training data, as noted by the reviewer.
We will include more extensive research on this aspect in the revised paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. Most of my concerns have been satisfactorily addressed.
The only remaining issue pertains to the performance degradation when captions are extracted, which may limit the applicability of FineXtract in broader real-world scenarios. Nonetheless, I believe the overall contribution of this work outweighs this limitation. Therefore, I will maintain my original rating (weak accept).
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive comments and helpful feedback!
> *“The only remaining issue pertains to the performance degradation when captions are extracted, which may limit the applicability of FineXtract in broader real-world scenarios.”*
We still want to clarify that our main setting assumes captions are directly accessible, which is often the case in practice—particularly for real checkpoints on platforms like Civitai. In DreamBooth, for example, special tokens such as “a sks dog” are typically included to ensure proper model usage. This assumption is also commonly adopted in prior work.
For scenarios where captions are not directly available, we propose an alternative approach to extract partial information. While we do not claim to recover full captions without degradation, to the best of our knowledge, this is the first attempt to explore caption extraction. Our results show that even partial information can increase extraction performance, and we hope this will inspire further research in this direction.
We greatly appreciate your time and efforts spent in reviewing our work! | Summary: This paper introduces a novel technique for extracting fine-tuning data from personalized diffusion models, distinguishing it from prior work on data extraction in standard diffusion models. The additional constraints imposed by the fine-tuning phase have real-world implications, enabling more effective data extraction by leveraging the knowledge that the current model is a fine-tuned version of a base model.
The key insight is that the fine-tuned model's score prediction can be expressed as a weighted combination of the base model's score and the score of the fine-tuning dataset distribution. This formulation allows for an analytical conversion to isolate the score of the fine-tuning distribution, which can then be used for sampling and subsequent clustering to recover training data.
The proposed method is evaluated on a customized generation benchmark, demonstrating improved performance by leveraging this new perspective.
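The sampling-time guidance described above can be sketched in a few lines (hypothetical function and argument names; only the extrapolation with $w = 1/\lambda$ follows the paper's formulation, the rest is illustrative):

```python
import numpy as np

def guided_noise(eps_pretrained, eps_finetuned, w):
    """Extrapolate the noise prediction toward the fine-tuning data.

    With the fine-tuned model viewed as an interpolation between the
    pretrained model and the fine-tuning distribution, w = 1/lambda > 1
    pushes the prediction past the fine-tuned one, amplifying exactly
    what fine-tuning changed relative to the base model.
    """
    return eps_pretrained + w * (eps_finetuned - eps_pretrained)
```

At $w = 1$ this reduces to the fine-tuned model's own prediction; larger $w$ extrapolates further toward the fine-tuning distribution.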
Claims And Evidence: Yes, the claims are well validated.
Methods And Evaluation Criteria: The overall approach is well-founded. I appreciate the novel problem setup and the insightful method developed around it. The evaluation is robust and effectively supports the proposed technique.
Theoretical Claims: Yes, I checked the equations for deriving the score for the distribution of the finetuned dataset
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: This paper presents a novel approach to improving training data extraction from pre-trained diffusion models by introducing a tailored guidance term that isolates the influence of fine-tuning. Given the widespread availability of fine-tuned models online, this technique has good practical relevance.
Essential References Not Discussed: no.
Other Strengths And Weaknesses: The overall illustration is clear. And the method makes intuitively sense and is easy to derive and implement.
Other Comments Or Suggestions: not applicable.
Questions For Authors: not applicable.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We greatly appreciate your positive comments! | Summary: This paper introduces a method to extract training data from personalized diffusion models (DMs). The method approximates the fine-tuned model's distribution as an interpolation between the pretrained model's distribution and the fine-tuning data distribution. By extrapolating the score functions of these models, the generation can be guided toward regions with high probability in the fine-tuning data distribution. Subsequently, clustering is used to identify the most likely matches to the original training data.
## update after rebuttal
I am satisfied with the rebuttal: it adds experiments with FLUX.1 that show consistent improvements over the baseline method, and it shows how extraction rates correlate with training iterations, demonstrating that the method's efficacy tracks model memorization.
I am convinced to raise my score to Accept.
Claims And Evidence: - The primary claim that fine-tuned diffusion models leak information about their training data is well-supported by the experimental results, and the method successfully extracts training images from various models
- The theoretical claim about approximating the fine-tuned model's distribution as an interpolation between the pretrained model and the training data distribution is mathematically consistent and tested with effective practical results.
- The performance claims are supported by experiments across different models, fine-tuning methods, and datasets, showing consistent improvements over baselines. Effectiveness is also demonstrated on real-world checkpoints from HuggingFace.
Methods And Evaluation Criteria: - The proposed method of model guidance through extrapolation is well-motivated and appropriate for the task, leveraging the mathematical relationship between score matching and diffusion models.
- The evaluation metrics are appropriate and refer to previous work in the field.
- The choice of datasets for style and object learning is appropriate and common for the use of personalized diffusion models.
Theoretical Claims: I have read through Section 4 and the derivations appear sound. However, I haven't checked the correctness of them.
Experimental Designs Or Analyses: - The ablation studies on guidance scale and correction term provide valuable insights into the sensitivity of the method, while experiments on real-world checkpoints are particularly valuable.
- The defense experiments are also interesting, showing that they can reduce extraction success at the cost of generation quality, highlighting practical tradeoffs.
- Experiments on model architectures only account for convolution-based diffusion. Current state-of-the-art generative models, like SD3, Flux or Sana, feature a transformer-based architecture. The paper would benefit from extending the evaluations to these.
Supplementary Material: Yes, I reviewed Appendices B-K, especially for comparisons and visualizations.
Relation To Broader Scientific Literature: This work extends previous research on memorization and data extraction in diffusion models, focusing on personalized models. It also contributes to the area of privacy and copyright in GenAI by providing concrete evidence of data leakage risks.
Essential References Not Discussed: No missing essential related works to my knowledge.
Other Strengths And Weaknesses: ### Strengths
- The formulation of fine-tuned model distribution as an interpolation between the base model and training data distributions provides an elegant theoretical framework that could be applied to other problems beyond data extraction.
- The work is original in its application of score matching and guidance to the problem of extracting private training data.
### Weaknesses
- The extraction success rate is significantly better than the baselines, but still nearly 20% of the total, which seems limited.
- The computational requirements of running two models simultaneously are higher than the baseline (more GPU memory), which may limit practical applicability on consumer-grade equipment.
Other Comments Or Suggestions: Additional visualizations showing the distribution shifts between pretrained and fine-tuned models could help readers better understand the theoretical foundations.
Questions For Authors: Given that the extraction success rate is around 20% in most cases, what factors do you believe limit the extraction of the remaining training data? Would further improvements to the guidance or clustering components potentially increase this rate, or are there fundamental limitations to how much information can be extracted?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback!
1. About Experiment Design and Analysis:
> Experiments on model architectures only account for convolution-based diffusion. Current state-of-the-art generative models, like SD3, Flux or Sana, feature a transformer-based architecture. The paper would benefit from extending the evaluations to these.
We have added new results using FLUX.1 [dev]. We experiment on the LoRA scenario with the official scripts provided by diffusers. The training iterations are fixed to 150$N_0$ as suggested by the repository, and the other hyper-parameters are kept the same as in the repository. We compare our method with the baseline (CFG); both methods perform best when the guidance strength $w'$ is 3.0, and the improvement of our method is consistent. We will add these results in the revised paper.
| | AS | A-ESR($\tau$=0.7) | A-ESR($\tau$=0.6) |
|:--------------:|:-----:|:-----------------:|:----------------:|
| CFG+Clustering | 0.407 | 0.03 | 0.20 |
| FineXtract | 0.496 | 0.30 | 0.43 |
2. About Weakness1 and Question:
> The extraction success rate is significantly better than the baselines, but still nearly 20% of the total, which seems limited.
> Given that the extraction success rate is around 20% in most cases, what factors do you believe limit the extraction of the remaining training data? Would further improvements to the guidance or clustering components potentially increase this rate, or are there fundamental limitations to how much information can be extracted?
We present updated results demonstrating how the extraction success rate increases with the number of training iterations across four classes of the WikiArt dataset, using DreamBooth as the fine-tuning method on SD v1.4.
| Training Iterations | AS | A-ESR($\tau$=0.6) |
|----------------------------|--------|-------------------|
| 100$N_0$ | 0.350 | 0.03 |
| 150$N_0$ | 0.360 | 0.05 |
| 200$N_0$ (commonly used setting) | 0.501 | 0.35 |
| 300$N_0$ | 0.564 | 0.58 |
| 400$N_0$ | 0.594 | 0.68 |
These results suggest that the extraction performance is primarily dependent on the extent of the information the model memorizes. Our current result, above 20% in most cases, is based on the default training configuration and the current checkpoint. We believe this limitation is largely due to the model's inherent memorization capacity.
Potential improvements may arise from leveraging additional information within the model. Our current approach relies solely on the output of predicted noise, utilizing the model in an end-to-end manner. Future work could focus on further analysis, such as measuring attention responses to different inputs or optimizing input noise, which might enhance the method.
3. About Weakness2:
> The computational requirements of running two models simultaneously are higher than the baseline (more GPU memory), which may limit practical applicability on consumer-grade equipment.
Even though we need to load two models onto the GPU simultaneously, this overhead is minimized because typically only a specific component is fine-tuned. For instance, in DreamBooth, the UNet is typically fine-tuned, while the text encoder remains unchanged. In LoRA, only the LoRA component is fine-tuned. Therefore, when using our method in such scenarios, the additional GPU memory usage is limited to the fine-tuned module. Moreover, FineXtract is an inference-only method and does not require the memory costs of gradient backpropagation. Therefore, loading two models does not substantially limit practical applicability on consumer-grade equipment.
4. Other comments or suggestions:
> Additional visualizations showing the distribution shifts between pretrained and fine-tuned models could help readers better understand the theoretical foundations.
Thank you for your valuable advice. We will revise the paper accordingly.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. In particular, the added experiments with FLUX.1 show consistent improvements over the baseline method, and the new results show that extraction rates are correlated with training iterations, indicating that the method's efficacy follows model memorization.
I am convinced to raise my score to Accept.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our work! We're encouraged that most concerns have been addressed and truly appreciate the time and effort spent on the review. | null | null | null | null | null | null |
Multiobjective distribution matching | Accept (poster) | Summary: The paper tries to figure out how to do distribution matching to generate a distribution that aligns with multiple underlying distributions, often with conflicting objectives, known as a Pareto optimal distribution. The paper develops a theory on information geometry to construct the Pareto set. This allows for explicit derivation of the Pareto set and front for multivariate normal distributions. This leads to algorithms like multiobjective variational autoencoders (MOVAEs) to generate interpolated data distributions that can be used in multiple application fields. Based on the theory, the paper proposes the multiobjective generative adversarial network (MOGAN) algorithm, which is shown to be able to interpolate high quality real world images across domains.
Claims And Evidence: **Claim #1**: "A related but less explored challenge is generating a distribution that aligns with multiple underlying distributions, often with conflicting objectives, known as a Pareto optimal distribution."
- This is supported by convincing evidence. The problem of using one distribution to align with multiple distributions has real applications in machine learning.
**Claim #2**: "We figure out the difficulty of multiobjective distribution arises from the constrained parameter space and the complex geodesics formulation."
- This is supported by convincing evidence. The paper provides a formal theory for the optimization and calculates the Pareto set, while noting that explicit solutions are available only in special cases (the exponential family and the multivariate normal distribution). After carefully checking the math, I confirm that it is mathematically sound.
**Claim #3**: "MOVAE, which employs a non-linear decoder to map MVN distributions to real-world distributions, and MOGAN, which learns preference-conditioned generative models."
- This is supported by convincing evidence. The aforementioned theory corroborates the (theoretical) efficacy of the proposed algorithms.
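To ground the Pareto-optimal-distribution claims with a tiny worked example (our own illustration, which may differ from the paper's exact construction): a standard information-geometry fact is that the minimizer of a preference-weighted sum $\sum_i w_i \, \mathrm{KL}(q \| p_i)$ over all $q$ is the normalized weighted geometric mean of the $p_i$, which for exponential families amounts to averaging natural parameters. For univariate Gaussians:

```python
import numpy as np

def geometric_mixture_gaussian(mus, sigmas, weights):
    # Average the Gaussian natural parameters
    #   eta1 = mu / sigma^2,  eta2 = -1 / (2 sigma^2)
    # with the preference weights, then map back to (mu, sigma).
    mus, sigmas, w = (np.asarray(a, dtype=float) for a in (mus, sigmas, weights))
    eta1 = w @ (mus / sigmas**2)
    eta2 = w @ (-0.5 / sigmas**2)
    var = -0.5 / eta2
    return float(eta1 * var), float(np.sqrt(var))

# Equal preference for N(0, 1) and N(2, 1) yields N(1, 1).
mu, sigma = geometric_mixture_gaussian([0.0, 2.0], [1.0, 1.0], [0.5, 0.5])
```

Sweeping the preference weights traces out an interpolating family of distributions, which is the kind of Pareto set the paper constructs.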
Methods And Evaluation Criteria: Based on my understanding, the evaluation criterion is visualization of the interpolated data.
Strengths:
+ The visualized generated distributions fit nicely the theoretical outcome/results.
Weaknesses:
- The data this paper is evaluated on is quite simple: 2-D tiny greyscale images that are fairly simple and lacking in detail (e.g., the tiny images of a microwave consist of just a few boxes). It is unclear how this would work or scale to harder images such as CIFAR-10/100.
- The paper motivates the theoretical and algorithmic contributions by claiming that distribution matching can be used in fields such as domain adaptation, yet the paper did not use any domain adaptation dataset (DomainNet, for example) for interpolation.
Theoretical Claims: + The paper's strongest part is its theoretical contribution. Unless I missed anything, the proof looks right.
Experimental Designs Or Analyses: See Methods And Evaluation Criteria. My main complaint with the evaluation is that the data they used are quite simplistic. It's hard to tell if such a complex algorithm will scale to a more complex dataset.
Supplementary Material: Yes. (The appendix)
I read the entirety of the supplementary materials. The proofs look correct and the algorithmic description is correct and complete.
Relation To Broader Scientific Literature: This paper could potentially have a very significant impact on the broader scientific literature because distribution matching is very important in machine learning. Areas such as domain adaptation, generalization, etc. could benefit greatly from it. However, as I have noted in previous comments, there is no evidence that the proposed algorithm will scale to a larger, more difficult dataset. Especially since the authors are using a variation of GANs, it might suffer from the issues that GANs usually face, such as mode collapse.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: - Could you provide some evidence on how the proposed approach fares against harder data? For example, Office-31 or DomainNet.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Summary: The paper propose multiobjective distribution matching (MODM) using tools from information geometry. two concrete algorithms are introduced: a multiobjective variational autoencoder (MOVAE) and a multiobjective generative adversarial network (MOGAN). Experiments on the QuickDraw dataset are provided to demonstrate that the proposed methods are capable of generating high-quality interpolated image distributions.
Claims And Evidence: The claims are supported by mathematical derivations and by experimental results.
Methods And Evaluation Criteria: The proposed methods are suitable for multiobjective distribution matching.
Theoretical Claims: I reviewed the proofs provided for theorems such as Theorem 8 and Theorem 10 but didn't check the details.
Experimental Designs Or Analyses: I reviewed the experimental designs and analyses for all experiments in Section 6. There are some minor issues, which will be discussed in Weaknesses.
Supplementary Material: I reviewed all parts of the supplementary material.
Relation To Broader Scientific Literature: The key contributions of the paper relate to multiobjective optimization, information geometry, and generative modeling, which are discussed in the paper.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths:**
1. The consideration of multi-objective distribution matching is interesting and novel.
2. The proposed method utilizes rigorous mathematical approaches to solve the problem.
3. The experiments demonstrate the effectiveness of the proposed method.
**Weaknesses:**
1. The proposed method is not compared with the original VAE and GAN on real-world datasets.
2. It appears that no experiments have been conducted using the proposed PrefGAN.
3. The discussion on Multiobjective VAE vs. VAE over Mixture Distributions could be more solid if supported by additional literature and experiments.
4. There are some minor errors in the paper, e.g.,
- Page 1: "MOVAE (multiobjective Variational Autoencoder)" should be "Multiobjective Variational Autoencoder (MOVAE)."
- Page 4, line 215: "but letting" should perhaps be "by letting."
Other Comments Or Suggestions: For all experiments in the paper, more experimental details should be included, such as the number of epochs, learning rate, and other hyperparameters.
Questions For Authors: 1. What does Pref-conditioned GAN mean in Figure 3?
2. Why are different preferences used in Figures 3 and 4?
3. How does the runtime of the proposed method compare to the baseline methods?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Summary: This paper studies matching a distribution to multiple target distributions. The authors use information geometry to find Pareto optimal solutions, particularly for the exponential family. They apply this to multivariate normal distributions and a MOVAE. They also propose a multi objective GAN. Experiments demonstrate the performance of MOVAE and MOGAN.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The evaluation criteria is limited. Please see Weaknesses for more details.
Theoretical Claims: The theoretical claims appear sound.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Supplementary material was checked.
Relation To Broader Scientific Literature: The paper extends core ideas from information geometry and multiobjective optimization (e.g., Pareto set learning and MGDA) to generative modeling and distribution matching.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. Considers an important problem that has not been explored enough.
2. Provides novel theoretical framework based on information geometry.
Weaknesses and Questions:
1. One of my major concerns is regarding the limited experimental verification. Only one (arguably toy) image dataset is considered. It is not clear if the method performs well for more complex real-world datasets, such as natural images.
2. The considered experiments do not show the usefulness of the proposed method. Some real use case needs to be included to show the significance of the considered problem and efficacy of the proposed solution.
3. The experimental section lacks quantitative evaluations and baselines. For example, a possible baseline could use an aggregation function to convert the multiobjective problem into a single-objective optimization.
4. The theoretical analysis is restricted to special families of distributions (mainly exponential families like multivariate normals), which limits the generality. Some discussions on this is needed.
Other Comments Or Suggestions: No other comments.
Questions For Authors: Please refer to Weaknesses and Questions
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank you for your valuable comments and hope that the following response can address some of your concerns.
**W4 The theoretical analysis is restricted to special families of distributions (mainly exponential families like multivariate normals), which limits the generality. Some discussions on this is needed.**
Actually, the exponential family already covers many statistical models, e.g., normal, exponential, log-normal, gamma, chi-squared, beta, Dirichlet, Bernoulli, Poisson, geometric $\ldots$ (see the Wikipedia article "Exponential family"). Meanwhile, our discussion works on dually flat manifolds, which cover most commonly used statistical models, including not only exponential families but also mixture families (convex combinations of distributions) and, more generally, $\alpha$-affine manifolds and $\alpha$-families (see Section 3.6 in "Methods of Information Geometry" by S. Amari and H. Nagaoka). We will add more concrete examples of exponential families and mixture families in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Some of the concerns I raised have not been addressed. Therefore, I keep my score unchanged. I encourage the authors to incorporate the reviewers' comments to improve the quality of the manuscript in future versions.
Claims And Evidence: The derivation of the Pareto set is well supported by theory, but the paper lacks comparisons or ablation studies with conventional methods. Overall, the experimental section is limited and does not provide enough evidence that the proposed approach is more effective than simpler alternatives. For example, a baseline comparison using a vanilla VAE optimized with MSGD would provide a useful reference for readers.
Methods And Evaluation Criteria: The paper primarily demonstrates its concepts through theoretical derivations and illustrative experiments (e.g., with multivariate normal distributions and image interpolation). While this approach makes sense for exploring tradeoffs and Pareto optimality, it can be challenging to directly compare these results without quantitative analysis.
Theoretical Claims: To the best of my knowledge the theoretical claims and proofs in the appendix seems to be correct and extensively derived.
Experimental Designs Or Analyses: There is a lack of comprehensive ablation studies, error bars, or detailed comparisons with alternative multiobjective optimization approaches.
Supplementary Material: I have went through the full supplementary material.
Relation To Broader Scientific Literature: This theoretical contribution builds on and complements prior work in multiobjective optimization and generative modeling.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength
- Paper is well written and the theoretical claims and derivations are extensive.
- Extends Pareto optimality in the setting of generative models.
Weakness
- Limited experimental validation, specifically a lack of a good baseline model. Hard to judge practical benefit.
- Rely on strong assumptions that may not work in practice.
Other Comments Or Suggestions: N/A
Questions For Authors: - Would relaxing the dually flat manifold assumption significantly affect the applicability of the method in practice?
- How scalable are both MOVAE and MOGAN with respect to the dimension of the dataset?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for all your valuable comments. We are glad that you consider our theoretical results strong, and we hope that the following response can address your concerns.
----
**W1. Limited experimental validation.**
To address this, we have included a new dataset, the ageing dataset of real-world images.
**W2. Rely on strong assumptions that may not work in practice**
We address this issue in two parts. First, we derive the explicit Pareto set formulation for the dually-flat manifold, encompassing the Exponential and mixture families. This applies to a wide range of distributions, including MVN, Poisson, Gamma, Wishart, Beta, and Hypergeometric. Second, we extend our approach using nonlinear models like VAE and GAN to transform MVN into complex real-world distributions, further broadening its applicability.
**Q1. Would relaxing the dually flat manifold assumption significantly affect the applicability of the method in practice?**
In fact, most of the commonly used statistical models can be covered by the case of dually flat manifold, e.g., MVN model, Possion model, Gamma model, probability simplex ... If more generally, a non dually flat statistical models is considered, similar analysis can still be applied to its canonical divergence function, but there will be an additional error term of 4th order in the result (see Section 3.8 in "Methods of Information Geometry'' by S. Amari and H. Nagaoka).
**Q2. How scalable are both MOVAE and MOGAN with respect to the dimension of the dataset?**
We use MOVAE as an illustrative example. In MOVAE, the decoder network generates the output image from a latent vector $ z $, which typically has a low-dimensional representation. The Pareto-optimal distribution is constructed within this latent space and subsequently decoded into an image. By keeping the latent vector relatively small, the model maintains efficiency. The scalability for generating larger images primarily depends on the capacity and power of the neural network.
To empirically show the scalability, we have also added a new experiment on the ageing dataset, where the images are real-world $3 \times 512 \times 512$ images.
---
Reference
[1] https://github.com/royorel/FFHQ-Aging-Dataset. | null | null | null | null | null | null | ||
Reliable and Efficient Amortized Model-based Evaluation | Accept (poster) | Summary: This Paper proposes a new approach to evaluate LLM performance via IRT and to provide item generation with pre-chosen difficulty levels. I am not an expert in LLMs but I happened to work on IRT in the past. So my point mainly concern this aspect.
I am very short on time for ICML reviews. Apologies for my reviews being a bit short.
Claims And Evidence: The authors claims that their procedure simplifies evaluation of LLM-based models and even enables to generate new test items on the fly. As far as I understand it, the experiments support both claims.
Methods And Evaluation Criteria: I am not an expert in LLM evaluation but the general approach and metrics looked sensible to me.
Theoretical Claims: None
Experimental Designs Or Analyses: I am not an expert in LLM evaluation but the general approach and metrics looked sensible to me.
Supplementary Material: I only skimmed over the supplements.
Relation To Broader Scientific Literature: I am not familiar with the LLM literature. In terms of IRT literature, the fact that only the Rasch model was considered is somewhat problematic for me (see below).
Essential References Not Discussed: see above
Other Strengths And Weaknesses: For the Rasch model, the number of correctly answered items by a person / the number of people correctly answering an item are sufficient statistics for the person / item parameters, respectively. This means that for the Rasch model, EM-type algorithms are not really necessary, since the item and person parameters are essentially analytic. Is there a reason this simple solution to the Rasch model was not considered?
Why use only the Rasch model in your paper? The Rasch model is often overly restrictive. Typically, at least a 2-parameter logistic (2PL) model is appropriate, at least when working with humans.
I struggled to understand from the abstract what exactly is done in the paper. Perhaps the authors could add a bit more context at the start of the abstract?
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer Kyh8,
Thank you for your valuable feedback. We answer your comment below.
**Answer to Other Strengths And Weaknesses 1:** When the difficulty (ability) is known, the sum of the responses is indeed a sufficient statistic for ability (difficulty). During calibration, however, we generally know neither the difficulty nor the ability. We cannot estimate them jointly because of the incidental parameter problem, first recognized by Neyman and Scott (Neyman & Scott, 1948). Harwell (1988) provides a detailed discussion of this problem in IRT. The marginal likelihood approach was then proposed to avoid this problem by marginalizing out the ability during difficulty estimation. This approach is broadly known in the related literature as the "expectation-maximization" algorithm (Harwell 1988). Below, we explore whether the maxima of this marginal likelihood objective can be obtained analytically. Consider one question with difficulty $z$ and a response vector $Y$ of size $N$. The log marginal likelihood function is
$$
\mathcal{L}(z) = \frac{1}{N} \sum_i \log p(Y_i|z)
$$
$$
= \frac{1}{N} \sum_i \mathbb{E}_{\theta_i} \log p(Y_i|\theta_i, z)
$$
$$
\approx \frac{1}{N M} \sum_{i,k} \log p(Y_i|\theta_{i,k}, z)
$$
$$
= \frac{1}{N M} \sum_{i,k} Y_i \log p_{i,k}(z) + (1- Y_i) \log (1- p_{i,k}(z))
$$
where $p_{i,k}(z) = \frac{1}{1 + e^{-(\theta_{i,k} - z)}} = \sigma(\theta_{i,k}-z)$. Taking derivative with respect to $z$ using chain rule and $\sigma'(x) = \sigma(x) (1-\sigma(x))$:
$$
\frac{d\mathcal{L}}{dz} = \frac{1}{N M} \sum_{i,k} \left[ \frac{Y_i}{p_{i,k}(z)} - \frac{1 - Y_i}{1 - p_{i,k}(z)} \right] \frac{d p_{i,k}(z)}{dz}
$$
Since $p_{i,k}(z) = \sigma(\theta_{i,k} - z)$, we have $\frac{d p_{i,k}}{dz} = -p_{i,k} (1 - p_{i,k})$, so
$$
\frac{d\mathcal{L}}{dz} = -\frac{1}{N M} \sum_{i,k} \left( Y_i (1 - p_{i,k}) - (1 - Y_i) p_{i,k} \right)
$$
$$
= \frac{1}{N M} \sum_{i,k} \left( p_{i,k} - Y_i \right)
$$
Setting the derivative to zero:
$$
\frac{d\mathcal{L}}{dz} = \frac{1}{N M} \sum_{i,k} \left( p_{i,k} - Y_i \right) = 0 \Rightarrow
\sum_{i,k} Y_i = \sum_{i,k} \frac{1}{1 + e^{-(\theta_{i,k} - z)}}
$$
This expression does not admit an analytic solution for $z$ except in degenerate cases, because a sum of logistic functions generally does not simplify to a closed-form invertible expression in $z$.
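Although no closed form exists, the stationarity condition is easy to solve numerically; here is a small sketch (our own illustration, not part of the rebuttal) that finds $z$ by bisection, using the fact that $\sum_{i,k} \sigma(\theta_{i,k} - z)$ is strictly decreasing in $z$:

```python
import math

def solve_difficulty(thetas, y_sum, lo=-10.0, hi=10.0, iters=100):
    # Solve sum_i sigma(theta_i - z) = y_sum for z by bisection.
    sigma = lambda x: 1.0 / (1.0 + math.exp(-x))
    f = lambda z: sum(sigma(t - z) for t in thetas) - y_sum
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:   # predicted correct-count too high -> difficulty too low
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Ability draws 0, 1, 2 with two correct responses in total.
z_hat = solve_difficulty([0.0, 1.0, 2.0], 2.0)
```

Any one-dimensional root finder works here; bisection is used only because it needs no derivative and is guaranteed to converge on a monotone function.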
[1] Harwell, Baker, Zwarts. Item Parameter Estimation Via Marginal Maximum Likelihood and an EM Algorithm: A Didactic. Journal of Educational Statistics, 1988, pp. 243-271
[2] Neyman, Scott. Consistent Estimates Based on Partially Consistent Observations. Econometrica, pp. 1-32
**Answer to Other Strengths And Weaknesses 2:** We conducted an ablation study comparing 3 IRT variants—Rasch, 2PL, and 3PL (see Figure 9). The results indicated that neither the 2PL nor the 3PL models outperformed the Rasch model, a finding we attribute to the limited number of test takers in the LLM context. With additional parameters, the more complex models tend to suffer from increased estimation complexity, which in turn raises the risks of overfitting and higher variance. Based on these considerations, we chose the Rasch model.
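For readers less familiar with these IRT variants, the three item response functions compared above differ only in how many item parameters they carry; a minimal sketch (illustrative, not the paper's code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rasch(theta, b):                 # difficulty b only
    return sigmoid(theta - b)

def two_pl(theta, a, b):             # adds discrimination a
    return sigmoid(a * (theta - b))

def three_pl(theta, a, b, c):        # adds guessing floor c
    return c + (1.0 - c) * sigmoid(a * (theta - b))
```

Each extra parameter per item must be estimated from the same limited pool of test takers, which is the overfitting/variance trade-off the ablation points to.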
**Answer to Other Strengths And Weaknesses 3:** We have revised our abstract: Comprehensive evaluations of language models (LMs) during both development and deployment phases are necessary because these models possess numerous capabilities (e.g., mathematical reasoning, legal support, or medical diagnostics) as well as safety risks (e.g., racial bias, toxicity, or misinformation). The average score across a wide range of benchmarks provides a signal that helps guide the use of these LMs in practice. Currently, holistic evaluations are costly due to the large volume of benchmark questions, making frequent evaluations impractical. A popular attempt to lower the cost is to compute the average score on a subset of the benchmark. This approach, unfortunately, often renders an unreliable measure of LM performance because the average score is often confounded with the difficulty of the questions in the benchmark subset. Item response theory (IRT) was designed to address this challenge, providing a reliable measurement by carefully controlling for question difficulty. Unfortunately, question difficulty is expensive to estimate. Facing this challenge, we train a model that predicts question difficulty from its content, enabling a reliable measurement at a fraction of the cost. In addition, we leverage this difficulty predictor to further improve evaluation efficiency by training a question generator conditioned on a difficulty level. This question generator is essential in adaptive testing, where, instead of using a random subset of the benchmark questions, informative questions are adaptively chosen based on the current estimate of LM performance. Experiments on 22 common natural language benchmarks and 172 LMs show that this approach is more reliable and efficient compared to current common practice.
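The adaptive-testing step mentioned in the revised abstract can be illustrated with a short sketch (function names are our own, illustrative): under the Rasch model, an item's Fisher information at ability $\theta$ is $p(1-p)$, maximized when the item's difficulty matches the current ability estimate, so the most informative next question is the one whose difficulty is closest to the current estimate:

```python
import math

def item_information(theta, b):
    # Fisher information of a Rasch item: p * (1 - p),
    # largest when difficulty b is near ability theta.
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def pick_next_item(theta_hat, difficulties):
    # Adaptive testing: choose the most informative remaining item.
    return max(range(len(difficulties)),
               key=lambda j: item_information(theta_hat, difficulties[j]))

# With ability estimate 0.0, the item with difficulty 0.1 is chosen.
best = pick_next_item(0.0, [-2.0, 0.1, 3.0])
```

This is why a difficulty-conditioned question generator matters: adaptive testing keeps requesting items near the moving ability estimate, which a fixed benchmark pool may not contain.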
| Summary: The paper introduces a new way to evaluate large language models (LLMs) using Item Response Theory (IRT), a method from psychology that helps measure abilities and item difficulties separately. Traditional evaluation methods can be expensive and depend too much on the specific test questions chosen, so this paper aims to make the process more reliable as well as efficient. It includes two main innovations: (i) Amortized Calibration and (ii) Conditional Item Generator.
Claims And Evidence: The paper makes several key claims:
1. The IRT-based method is more reliable and efficient than traditional Classical Test Theory (CTT) methods. This is backed by Table 1.
2. The conditional item generator creates effective new questions, backed by Sec 4.4
Methods And Evaluation Criteria: This paper measures different approaches with common metrics like AUC-ROC.
Theoretical Claims: This paper provides certain modifications with theoretical claims. But I am not able to justify whether they are right or wrong.
Experimental Designs Or Analyses: The experimental setup involved 25 NLP datasets (e.g., airbench, mmlu, truthful_qa) and 184 LLMs (e.g., ada, LLaMA, GPT-4). The experiments are solid enough to support the conclusion.
Supplementary Material: Many. I didn't go through a lot.
Relation To Broader Scientific Literature: The work builds on psychometric theory, particularly IRT, with roots in educational testing and extends it to AI evaluation, aligning with recent efforts in scalable and efficient LLM evaluation.
Essential References Not Discussed: Not much. But it would be better to discuss related topics such as LLM performance prediction. The current related work is rather too short.
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer 2sSn,
Thank you for your valuable feedback. We answer your comment below.
**Essential References Not Discussed:** Not much. But it would be better to discuss related topics such as LLM performance prediction. The current related work is rather too short.
**Answer:** Thank you for your valuable feedback. We agree that a more detailed discussion on LLM performance prediction would strengthen our related work section. In response, we will incorporate the following content into our updated submission:
Recent research has made significant strides in understanding and predicting LLM performance. For instance, Schaeffer et al. (2023) address performance discontinuities associated with emergent behaviors, while Ganguli et al. (2022a), Owen (2024), and Finnveden (2020) have illustrated how downstream task performance can be systematically predicted. In one study, Hu et al. (2024) established a clear relationship between the amount of training resources and the resulting performance on downstream tasks by iteratively pretraining a model. Moreover, Arora and Goyal (2023) offer insights into forecasting performance by breaking down complex language model capabilities into fundamental skills. Recent work by Ruan et al. (2024) further enhances scaling laws by integrating latent variables that capture underlying patterns across various model families and tasks. These works on predictable model performance are complementary to research in IRT that helps improve the efficiency of model evaluation.
[1] Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. 2023. Are emergent abilities of large language models a mirage? In Conference on Neural Information Processing Systems.
[2] Deep Ganguli, Danny Hernandez, Liane Lovitt, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova Dassarma, Dawn Drain, Nelson Elhage, Sheer El Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Scott Johnston, Andy Jones, Nicholas Joseph, Jackson Kernian, Shauna Kravec, Ben Mann, Neel Nanda, Kamal Ndousse, Catherine Olsson, Daniela Amodei, Tom Brown, Jared Kaplan, Sam McCandlish, Christopher Olah, Dario Amodei, and Jack Clark. 2022a. Predictability and surprise in large generative models. In Conference on Fairness, Accountability, and Transparency. ACM.
[3] David Owen. 2024. How predictable is language model benchmark performance? In arXiv.
[4] Lukas Finnveden. 2020. Extrapolating gpt-n performance.
[5] Shengding Hu, Xin Liu, Xu Han, Xinrong Zhang, Chaoqun He, Weilin Zhao, Yankai Lin, Ning Ding, Zebin Ou, Guoyang Zeng, Zhiyuan Liu, and Maosong Sun. 2024. Predicting emergent abilities with infinite resolution evaluation. In International Conference on Learning Representations.
[6] Sanjeev Arora and Anirudh Goyal. 2023. A theory for emergence of complex skills in language models. In arXiv.
[7] Yangjun Ruan, Chris J. Maddison, and Tatsunori Hashimoto. 2024. Observational scaling laws and the predictability of language model performance. In arXiv.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. The reviewer has no further comment on this paper except for two points:
1. Some references (Both in the rebuttal and the paper) listed are the ArXiv version, not the proceedings version. Kindly use the proceedings version.
2. The related work on performance prediction (rebuttal) could be further improved for comprehensiveness. It appears the reference mainly comes from ML conferences like ICLR, ICML or NeurIPS. Perhaps keyword searching in NLP conference proceedings like ACL or EMNLP could help? It would be great to recognize contributions of papers from other venues on the related topics.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment, we have fixed the citation issue in the following related work on LLM performance prediction, and we will fix the reference list of the paper in the final version. We have added more related work from NLP conferences like ACL and EMNLP.
Recent research has significantly advanced our understanding of LLM performance prediction by establishing robust scaling laws and uncovering emergent phenomena. Kaplan et al. (2020), Hoffmann et al. (2022), and Hernandez et al. (2022) laid the groundwork by elucidating how model performance scales with size, data, and compute. Bahri et al. (2024) and Muennighoff et al. (2023) have deepened these insights, while studies such as those by Isik et al. (2024), Ghorbani et al. (2021), Zhuocheng et al. (2023), Caballero et al. (2023), and Henighan et al. (2020) have extended scaling laws to predict downstream task performance. Research on predicting emergent abilities with infinite resolution evaluation (2024) has highlighted the sudden performance gains. Schaeffer et al. (2023) examined discontinuities linked to emergent abilities, while Finnveden (2020) explored methods for extrapolating GPT performance. Ganguli et al. (2022a) and Owen (2024) scrutinized the balance between predictability and surprise in generative models, and Arora and Goyal (2023) broke down complex LLM skills into fundamental components to facilitate granular forecasting. Moreover, studies on emergence phenomena by Suzgun et al. (2022) and Wei et al. (2022) have shed light on the mechanisms behind abrupt performance improvements. Ruan et al. (2024) introduced latent variables that generalize across tasks and model families. Zhang et al. (2024) proposed a collaborative framework that leverages cross-family model-task performance patterns through factor analysis. Finally, to address broader challenges in the field, Anwar et al. (2024) highlighted foundational issues in the alignment and safety of LLMs.
[1] Rylan Schaeffer et al. 2023. *Are emergent abilities of large language models a mirage?* NeurIPS.
[2] Deep Ganguli et al. 2022. *Predictability and surprise in large generative models.* FAccT.
[3] David Owen. 2024. *How predictable is language model benchmark performance?* arXiv.
[4] Lukas Finnveden. 2020. *Extrapolating GPT-n performance.* Online.
[5] Shengding Hu et al. 2024. *Predicting emergent abilities with infinite resolution evaluation.* ICLR.
[6] Sanjeev Arora et al. 2023. *A theory for emergence of complex skills in language models.* arXiv.
[7] Yangjun Ruan et al. 2024. *Observational scaling laws and the predictability of language model performance.* NeurIPS.
[8] Jared Kaplan et al. 2020. *Scaling laws for neural language models.* arXiv.
[9] Jordan Hoffmann et al. 2022. *An empirical analysis of compute-optimal large language model training.* NeurIPS.
[10] Danny Hernandez et al. 2022. *Scaling laws and interpretability of learning from repeated data.* arXiv.
[11] Yasaman Bahri et al. 2024. *Explaining neural scaling laws.* PNAS.
[12] Niklas Muennighoff et al. 2023. *Scaling data-constrained language models.* NeurIPS.
[13] Berivan Isik et al. 2024. *Scaling laws for downstream task performance of large language models.* ICLR.
[14] Behrooz Ghorbani et al. 2021. *Scaling laws for neural machine translation.* ICLR.
[15] Zhang Zhuocheng et al. 2023. *Scaling law for document neural machine translation.* Findings of EMNLP.
[16] Ethan Caballero et al. 2023. *Broken neural scaling laws.* ICLR.
[17] Tom Henighan et al. 2020. *Scaling laws for autoregressive generative modeling.* arXiv.
[18] Usman Anwar et al. 2024. *Foundational challenges in assuring alignment and safety of large language models.* arXiv.
[19] Mirac Suzgun et al. 2022. *Challenging BIG-Bench tasks and whether chain-of-thought can solve them.* ACL (Findings).
[20] Jason Wei et al. 2022. *Emergent abilities of large language models.* TMLR.
[21] Qiyuan Zhang et al. 2024. *Collaborative performance prediction for large language models.* EMNLP. | Summary: This paper proposes a novel amortized model-based approach based on Item Response Theory to tackle the problem of the dependence of evaluation procedures on test subset selection and the high cost of running extensive evaluations. Through extensive experiments, the authors show a reduced query complexity while maintaining reliability and better generalization to unseen test subsets.
Claims And Evidence: Claims are properly supported by evidence and design choices are ablated over.
Methods And Evaluation Criteria: The considered methods and evaluation criteria are reasonable/standard and the evaluation setting is extensive.
Theoretical Claims: I checked the theoretical claims overall, no issues to be discussed.
Experimental Designs Or Analyses: The experimental design is extensive and sound overall and ablations especially on the use IRT models are very useful for readers not familiar with IRT.
Supplementary Material: Yes, mostly implementation details and checking text outputs for soundness in general.
Relation To Broader Scientific Literature: Getting the most out of evaluations is one of the most crucial problems for properly assessing progress on capabilities and model safety. The authors reduce the cost of evaluations via amortization, while benefits of that can help with generalization to unseen test settings and adaptively tailoring evaluations to the model.
Essential References Not Discussed: None to the best of my knowledge.
Other Strengths And Weaknesses: - Figures, tables etc aren't properly referenced anywhere throughout the paper, e.g “Figure 2” in line 315 column 2 or “Table 2” in line 809 in Appendix.
Other Comments Or Suggestions: - Training details and hyperparameters for the conditional item generator might be better shown as a table in Appendix E.2 + typo in the title for the same section: “Trainig” -> “Training”.
Questions For Authors: - How do you make sure that by adaptive testing, the evaluation procedure doesn't overfit some given model?
- Do you have any intuitions about your approach generalizing to a setting where you're training an evaluation model on multiple LLMs (example previous version of some LLM) and using the evaluation model on other LLMs (e.g new iterations of the same LLMs)? A particular example could be the same model's base and instruction-tuned versions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer ztqt,
Thank you for your valuable feedback. We answer your comment below.
**Other Strengths And Weaknesses:** Figures, tables etc aren't properly referenced anywhere throughout the paper, e.g “Figure 2” in line 315 column 2 or “Table 2” in line 809 in Appendix.
**Answer:** Thank you for your careful review. We appreciate you pointing out the issues with the referencing of figures and tables. We will correct these references in the latest submission.
**Other Comments Or Suggestions:** Training details and hyperparameters for the conditional item generator might be better shown as a table in Appendix E.2 + typo in the title for the same section: “Trainig” -> “Training”
**Answer:** For the hyperparameters of the conditional item generator, we consistently used a temperature of 0.6, a top_p of 0.9, and a max_tokens of 256. We will update the paper to include this information as you suggest. Additionally, we appreciate you pointing out the typo—we will correct it.
**Questions For Authors 1:** How do you make sure that by adaptive testing, the evaluation procedure doesn't overfit some given model?
**Answer 1:** Thank you for your thoughtful question. Adaptive testing builds on the estimated question difficulty during the calibration phase. We calibrate each question’s difficulty using responses from 172 diverse LLMs, ensuring that the difficulty estimation reflects a broad range of capabilities rather than the trait of any single LLM. This calibration phase identifies which questions are challenging and which are comparatively easier, providing a robust reference. By basing subsequent adaptive testing on these derived difficulty estimates, we ensure that the evaluation process does not overfit a specific LLM.
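The paper's exact calibration procedure is not reproduced in this rebuttal; as a generic illustration of the idea, a one-parameter (Rasch) IRT model can jointly fit model abilities and item difficulties from a binary response matrix, so that difficulty estimates are anchored to the whole pool of models rather than any single one. The function names and the gradient-ascent fitting loop below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rasch_prob(theta, b):
    # Probability that a model with ability `theta` answers an item
    # of difficulty `b` correctly (1PL / Rasch model).
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def calibrate_difficulty(responses, n_iters=200, lr=0.5):
    # `responses`: (n_models, n_items) binary matrix of correct/incorrect.
    # Jointly fits abilities and difficulties by gradient ascent on the
    # Bernoulli log-likelihood; an illustrative sketch only.
    n_models, n_items = responses.shape
    theta = np.zeros(n_models)
    b = np.zeros(n_items)
    for _ in range(n_iters):
        p = rasch_prob(theta[:, None], b[None, :])
        resid = responses - p            # gradient of the log-likelihood
        theta += lr * resid.sum(axis=1) / n_items
        b -= lr * resid.sum(axis=0) / n_models
        b -= b.mean()                    # fix the scale indeterminacy
    return theta, b
```

Items answered correctly by fewer models end up with higher fitted difficulty, which is the reference an adaptive tester can then use without overfitting to one model.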
**Questions For Authors 2:** Do you have any intuitions about your approach generalizing to a setting where you're training an evaluation model on multiple LLMs (example previous version of some LLM) and using the evaluation model on other LLMs (e.g new iterations of the same LLMs)? A particular example could be the same model's base and instruction-tuned versions.
**Answer 2:** Thank you for your question. We calibrate the difficulty of each question by analyzing responses from 172 diverse LLMs—including base versions, instruction-tuned versions, and RLHF versions. This pool spans a wide range of sizes (for example, Llama 3 in 8B, 70B, and 405B, both base and instruct), ensuring that our evaluation framework remains robust and reliable across different training iterations and model sizes, and is applicable to other LLMs. Additionally, we recommend periodically recalibrating the difficulty levels to ensure our evaluation framework remains comprehensive and current with the latest LLMs. | Summary: This work proposed a novel way of revisiting large-scale LLM evaluation from IRT perspective. The novel contribution comes from different perspectives: (1) using LLMs to estimate the difficulty of evaluation examples (items), (2) LLM based item generator that can generate a synthetic evaluation example based on the difficulty needed. Using real world benchmarks and pretrained models, they showed how the proposed framework outperformed a strong baseline (model-free classical test theory).
Claims And Evidence: - This work has very strong motivation: number of LLM benchmarks is constantly growing together with the number of models we need to evaluate between each other.
- IRT has a very strong background literature and successful application, but not yet become a standard in LLM evals. This work steps into this direction.
Methods And Evaluation Criteria: The proposed method is aimed at evaluation of LLMs, and evaluation of this method involves comparison with other frameworks such as CTT. The implemented testbed and selection of LLM models and benchmarks follows the current standard and will be valuable for community.
Theoretical Claims: I did not check correctness of proofs given the soundness of empirical experimental results.
Experimental Designs Or Analyses: I have checked the experimental testbed, and overall they look good to me. Authors provided very thorough description of all experiments and model training for amortized calibration in the appendix.
Supplementary Material: I did not review supplementary material apart from the appendix in the main PDF.
Relation To Broader Scientific Literature: This work can be very relevant to LLM evaluations community that struggle with the amount of compute and resources needed to keep up with the amount of benchmarks that are required for proper comparisons.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: n/a
Questions For Authors: Could you please add details about the decoding / sampling parameters that were used (1) in PPO training, (2) during data generation with the trained models? Looks like they were not mentioned in the text.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer ydtz,
Thank you for your valuable feedback. For both PPO training and data generation, we used a temperature of 0.6, top_p of 0.9, and a max_tokens of 256. We have added this information to the updated submission. We appreciate your attention to detail and hope this clarifies our experimental setup.
Kind regards,
Authors | null | null | null | null | null | null |
Dendritic Localized Learning: Toward Biologically Plausible Algorithm | Accept (poster) | Summary: This work proposes a biologically plausible algorithm for training deep neural networks utilizing apical dendrites. The authors apply the proposed algorithms to learning in MLPs, CNNs, and RNNs, and demonstrate that the proposed algorithm outperforms previous biologically plausible algorithms that satisfy all three plausibility criteria set up by the authors.
Claims And Evidence: The manuscript is technically sound except for the issues on evaluation and novelty discussed below.
Methods And Evaluation Criteria: The implementation of the previous algorithms appears to be suboptimal. For instance, in table 1, the performance of feedback alignment in MNIST with MLPs is 91.87%. However, previous work has shown that this algorithm can achieve 98% test performance (Bartunov et al., NeurIPS 2018).
Additionally, I found the second criterion for biological plausibility to be somewhat problematic considering the ubiquitous presence of neuro-modulatory signals in the brain that deliver global error signals to neurons.
Theoretical Claims: Yes, I checked.
In the second line of Eq. 15, the derivatives with respect to i+2, …, n are omitted. Why is that the case?
Experimental Designs Or Analyses: Implementation details of previous algorithms are missing.
Supplementary Material: Yes, but not very carefully.
Relation To Broader Scientific Literature: I’m not convinced of the novelty of the manuscript. The idea of using apical dendrites for biologically-plausible credit assignments has been discussed previously (J Guerguiev et al., eLife, 2017; J Sacramento et al., NeurIPS 2018; A Payeur et al., Nat Neurosci 2021, …). This work does not appear to offer improvement either in terms of biological plausibility or performance. It is also disappointing that the authors did not discuss any of these works in the related work section.
If $\Theta$ and $W$ are initialized to be the same and $\xi_i$ in Eq. 9 is replaced with $u_i$, the proposed rule becomes equivalent to backprop. Although I am not sure if this particular approximation was implemented before, I’m not surprised by its decent performance, considering previous work explored similar approximations.
Essential References Not Discussed: Please see the comment above.
Other Strengths And Weaknesses: A thorough comparison with previously proposed algorithms presented in Table 1 is potentially beneficial for the field, though the authors should make sure that all algorithms are evaluated in a fair condition.
Other Comments Or Suggestions: L088: 'Pyramidal neurons consist of 70-85% of neurons': I believe this statement is only true for the cortex. The granule cell is presumably the most numerous neuron type in the mammalian brain.
Questions For Authors: Please see the comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: ### 1. Feedback Alignment with MLP on MNIST can achieve 98% test performance.
As stated in Line 325 (left column), to ensure fairness, we adopted the same architecture (784-1024-512-256-10 FC layers, see Appendix C.2) for all algorithms on MNIST.
The architecture used in the paper you mentioned consists of a 256-256-256-256-256-10 FC structure.
Differences in network architecture can lead to performance discrepancies, and deeper networks typically achieve higher accuracy.
For fair comparison, we avoided additional training techniques (such as batch normalization and residual connections) as much as possible, because potential biases introduced by these techniques may have varying effects across different algorithms.
Therefore, it is not surprising that our reproduced test performance is slightly lower than theirs.
### 2. The second criterion is problematic considering the neuro-modulatory signals in the brain that deliver global error signals to neurons.
Thanks for your suggestion.
We are aware of the presence of global signals in the brain, which play an important role in modulating certain mechanisms across neurons.
However, there is currently no clear biological evidence that these signals represent errors corresponding to the difference between expected and actual outputs, nor that they precisely capture the direction and magnitude required for updating each neuron.
We agree that further clarification of this criterion is necessary and will add this discussion into Section 2.1.
### 3. In the second line of Eq. 15, the derivatives with respect to i+2, …, n are omitted
Thank you for pointing this out. There should indeed be an ellipsis when expanding $\mathcal{L}$ in the second line of Eq.15. The rest of Eq. 15 remains correct.
### 4. Implementation details of previous algorithms are missing.
As stated in Line 325 (left column), to ensure fairness, we use the same model architecture across all learning algorithms for MLPs, CNNs, and RNNs under a certain dataset.
Detailed model specifications and training configurations can be found in Appendix C.2.
### 5. The authors did not discuss works using apical dendrites for credit assignments in the related work section.
Thanks for your suggestion.
While apical dendrites have been discussed in previous literature for credit assignment and have been utilized for various purposes, our study takes advantage of their properties to design and implement more biologically plausible learning algorithms, which differ significantly from existing approaches.
J Guerguiev et al. (eLife, 2017) primarily aimed to explain how deep learning can be achieved using segregated dendritic compartments, but it did not propose a specific learning algorithm.
As for Sacramento et al. (NeurIPS 2018), we have acknowledged in Line 200 (right column) that we followed their division strategy for pyramidal neurons.
A Payeur et al. (Nat Neurosci 2021) investigated burst-dependent synaptic plasticity.
While their work shares conceptual similarities with ours in terms of apical dendritic processing, the primary objective of their study differs from ours.
Our proposed method not only satisfies all three criteria but also achieves higher performance compared to existing biologically plausible learning methods.
That said, we will add the discussion in the related work section. Thank you!
### 6. If $\Theta$ and $\mathbf{W}$ are initialized to be the same and $ξ_i$ in Eq.9 is replaced with $u_i$, the proposed rule becomes equivalent to backprop. I am not sure if this particular approximation was implemented before.
First, even if $\Theta^T$ and $\mathbf{W}$ are initialized identically, their update directions will differ: since $\xi_i = u_i - x_i$ and $x_i$ from $\text{layer}_{i+1}$ comes from the label, $x_i$ cannot be exactly $2u_i$ or $0$, so $\xi_i$ and $u_i$ differ in both magnitude and direction. As a result, $\Theta^T$ and $\mathbf{W}$ will update in different directions according to Eq. 8 and Eq. 9. Furthermore, our formula is derived to minimize the loss, leading to the update formula for $\Theta^T$ in Eq. 9. Therefore, replacing $\xi_i$ in Eq. 9 with $u_i$ would result in an incorrect update direction. In conclusion, our method is not equivalent to BP.
Second, to the best of our knowledge, there are no existing algorithms that simultaneously satisfy all three criteria while also achieving competitive performance with BP.
If you could kindly provide the relevant references, we would be pleased to discuss them in our related work section. Thank you!
### 7. L088: 'Pyramidal neurons consist of 70-85% of neurons': This statement is only true for the cortex.
Thank you for pointing it out. We will clarify this in our revised manuscript.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their replies. While I appreciate their effort, particularly their meticulous comparisons with some of the previously proposed algorithms, I still believe this work does not offer significant improvement over existing literature either in terms of biological plausibility or performance.
The three criteria the authors introduced are neither necessary nor sufficient. While their algorithm is motivated by apical dendrite, it was merely introduced as a computational unit, failing to provide any new biological insights over existing literature on the functional role of apical dendrite for credit assignment (e.g., J Guerguiev et al., eLife, 2017; J Sacramento et al., NeurIPS 2018; A Payeur et al., Nat Neurosci 2021).
In terms of performance, there are a series of local learning algorithms that outperform the proposed algorithm. BurstProp (Payeur et al., Nat Neurosci, 2021), SoftHebb (Journe et al., ICLR 2023), and Counter-current learning (Kao & Hariharan, NeurIPS, 2024) achieve 79.9%, 80.3%, and 82.94% on CIFAR-10, respectively, compared to the 70.89% reported here. Importantly, these works, especially the first two, capture the biological constraints better than the presented work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable reply!
## 1. The three criteria the authors introduced are neither necessary nor sufficient. While their algorithm is motivated by apical dendrite, it was merely introduced as a computational unit, failing to provide any new biological insights over existing literature on the functional role of the apical dendrite for credit assignment.
The primary objective of this study is not to introduce new biological insights into the functional role of the apical dendrite but rather to demonstrate that a more biologically plausible learning algorithm can be achieved by leveraging the properties of pyramidal neurons, particularly the distinction between apical and basal dendrites.
Our approach enables more effective credit assignment and outperforms existing biologically plausible algorithms. As we mentioned in our 5th response, we will add the necessary discussion in the related work section.
## 2. In terms of performance, there are a series of local learning algorithms that outperform the proposed algorithm, like BurstProp (Payeur et al., Nat Neurosci, 2021), SoftHebb (Journe et al., ICLR 2023), and Counter-current learning (Kao & Hariharan, NeurIPS, 2024)
To ensure that all algorithms were evaluated under consistent conditions, we use the same simple CNN architecture for CIFAR-10 (Line 828), and did not use training techniques such as residual connections or batch normalization.
BurstProp and SoftHebb can be seen as advancements of STDP and Hebbian learning algorithms, both of which satisfy all three of our proposed criteria.
We aim for our proposed DLL to possess general capabilities comparable to those of backpropagation (BP), including the ability to perform both classification and regression tasks, handle diverse modalities such as images and language, and support multi-layer credit assignment.
For example, in addition to image recognition tasks, our DLL framework can also be applied to train RNNs for regression tasks (Section 4.3), such as next-character prediction and time-series forecasting.
In contrast, methods like BurstProp and SoftHebb may struggle with such tasks, as their designs are not well-suited for recurrent architectures.
Counter-current Learning violates our third criterion, as it explicitly involves distinct forward and backward phases.
Additionally, their CNN architecture is based on VGG, and the authors employed training techniques such as batch normalization.
We intentionally avoided these techniques to ensure all algorithms were evaluated under consistent conditions. | Summary: The article proposes a neural network that is constructed using a local loss and asymmetric weights in the forward and backward passes. The author introduces three criteria for biological plausibility that the neural network should satisfy and demonstrates that the proposed DLL meets these criteria. The author also shows through experiments that DLL outperforms other non-traditional BP networks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, I checked the math and didn't find any obvious issues.
Experimental Designs Or Analyses: The author demonstrates the advantage of DLL over other non-traditional BP methods through experiments, but I think more ablation studies need to be added to prove that the three design criteria for biological plausibility of DLL contribute to improving the model's performance. This will enhance the soundness of this work.
Supplementary Material: Yes, Appendix A
Relation To Broader Scientific Literature: I think the research in this paper is related to brain-inspired computation and computational neuroscience.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
- The article summarizes the characteristics of some non-traditional BP learning methods, extracts three criteria, and designs DLL based on them. Experimental results demonstrate that DLL can outperform previous similar methods.
- In today's world where SGD is widely used, exploring new learning methods is refreshing and can increase attention to learning approaches within the field.
Weaknesses
- I think the main issue is that the author should clarify why the three criteria for biological plausibility are important, for example, through ablation studies or theoretical proofs. This would enhance the scientific value of the paper.
- Although biologically plausible local losses and asymmetric weights are used, gradient descent is still employed for training in this paper. Could this be the main reason why DLL outperforms previous work? An important criterion for brain-inspired learning rules is to abandon GD (such as STDP), because the brain does not directly compute gradients.
Other Comments Or Suggestions: No.
Questions For Authors: Please see weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions, which are valuable for enhancing our paper. We are pleased that our contributions are well recognized.
Responses to your concerns and questions are hereby presented:
### 1. Why are the three criteria for biological plausibility important? More ablation studies are needed to prove that the three criteria contribute to improving the model's performance.
Thanks for your suggestion. The three criteria outlined in our paper were summarized from existing biologically plausible learning algorithms, rather than introduced as a means to enhance model performance.
Our primary objective is to investigate the biological plausibility of learning algorithms. Based on this investigation and the biological limitations of existing methods, we propose a novel approach that satisfies all three criteria—making it more biologically plausible—while achieving performance comparable to backpropagation.
We have conducted an ablation study on the backward weight $\Theta$ in Table 3.
Could you please provide any suggestions on how to design further ablation experiments?
### 2. Gradient descent is still employed for training. Could this be the main reason why DLL outperforms previous work?
Thank you for your insightful comment.
We agree that STDP is one of the most biologically plausible learning algorithms, as it not only avoids gradient descent but also satisfies the three criteria we summarized.
From the perspective of update rules, the high performance of our method stems from the local neuronal plasticity update rules we derived (Equations 8 and 9), rather than relying on a global gradient backpropagation process.
We believe that local updates to neuronal plasticity are consistent with the form of computing gradients on local losses, which is a plausible mechanism that could have emerged through long-term evolution.
Similar approaches have been employed in other biologically plausible learning methods, such as predictive coding, target propagation, and local losses.
Although gradient descent is not used in STDP, that does not necessarily mean neuronal plasticity rules cannot be derived locally using gradients.
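The DLL update rules (Eq. 8 and 9) are not reproduced in this rebuttal, so as a generic illustration of "a gradient computed only on a local loss," consider a single-unit delta rule: the update uses only the presynaptic activity, the unit's own output, and a locally available target, with no globally backpropagated error. The function name and parameters below are assumptions for illustration.

```python
import numpy as np

def local_delta_update(w, x, target, lr=0.2):
    # One local learning step: the weight change depends only on the
    # presynaptic activity `x`, the unit's own output `y`, and a local
    # target -- no error signal propagated from other layers.
    y = np.tanh(w @ x)
    err = target - y                  # error available locally at this unit
    grad_y = err * (1.0 - y ** 2)     # derivative of tanh
    return w + lr * np.outer(grad_y, x)
```

This is still a gradient on a (local) squared error, yet every quantity in the update is available at the synapse, which is the sense in which gradient-based local rules can remain biologically plausible.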
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns to some extent. However, given the relatively toy-like structure and the noticeable performance gap, I prefer to keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response and for acknowledging that we have addressed your concerns to some extent.
We sincerely appreciate your feedback and would like to respectfully emphasize that our proposed learning algorithm outperforms existing biologically plausible algorithms that satisfy the three criteria outlined in our study.
Moreover, similar to backpropagation, our algorithm is general-purpose: it can be applied to both classification and regression tasks, and is suitable for different modalities including language, time series, and vision.
In contrast, other biologically inspired approaches—such as those based on STDP or Hebbian learning—are typically limited in scope and primarily applicable to image classification tasks.
Thank you again for your thoughtful feedback. | Summary: The paper introduces Dendritic Localized Learning, as an alternative to backpropagation for training neural networks. The goal is to make learning more biologically realistic by addressing three main issues with backpropagation: the requirement of weight symmetry between the forward and backward passes, the use of global error signals, and the separation of learning into distinct forward and backward phases. The authors propose a model inspired by pyramidal neurons, which has separate compartments for different types of information processing. Instead of using the transposed forward weights for the backward pass, DLL introduces a separate set of trainable backward weights. The authors present experiments that show DLL performing better than other biologically plausible learning algorithms while coming close to backpropagation in accuracy. They apply DLL to multilayer perceptrons, convolutional networks, and recurrent networks and compare it to alternatives like feedback alignment and target propagation.
## update after rebuttal
Thanks to the authors for the detailed and thoughtful response. I appreciate the additional experiments on text datasets, the clarification around computational costs, and the effort to demonstrate the stability and scalability of DLL. It’s clear that a lot of work went into addressing the concerns.
That said, my overall view remains the same. While the new experiments help strengthen the paper, the broader limitations — such as the performance gap on complex tasks, the limited scale of evaluation, and the lack of deeper theoretical guarantees — are still there. I still find DLL to be a promising step toward more biologically plausible learning, and the empirical results are encouraging within the scope tested. So, I’m keeping my original score.
Claims And Evidence: The authors claim that DLL satisfies all three criteria for biological plausibility and achieves strong performance on benchmark datasets. The claim that DLL removes the requirement of weight symmetry is supported by the introduction of independent backward weights, which is a reasonable approach. The claim that DLL eliminates the need for a global error signal is also valid because local errors are computed at each neuron. The claim that DLL allows simultaneous forward and backward computation is plausible but could use more experimental validation, especially regarding real-time learning. The claim that DLL achieves performance close to backpropagation is somewhat supported by the reported results, but there is still a noticeable accuracy gap, especially on more complex datasets. There is no strong theoretical guarantee provided for convergence or learning efficiency, which weakens the claim that DLL is a robust learning method.
Methods And Evaluation Criteria: The proposed method is appropriate for the problem of biologically plausible learning, and the chosen datasets provide a reasonable benchmark. However, the experiments focus mostly on small-scale datasets like MNIST and CIFAR-10. It would be more convincing to see results on more complex datasets like ImageNet or natural language processing tasks. The authors compare DLL against several well-known biologically inspired learning methods, which is a strong aspect of the paper. However, the evaluation mainly considers accuracy, and there is little discussion of computational cost, memory efficiency, or sensitivity to hyperparameters, all of which are important in assessing the practicality of a learning algorithm.
Theoretical Claims: The paper includes mathematical descriptions of DLL, but it does not provide a formal proof of convergence or an analysis of how DLL behaves in different training conditions. While the authors argue that DLL follows biologically plausible principles, they do not establish whether DLL optimizes a well-defined loss function in a way that guarantees stable learning. The absence of such analysis leaves a gap in understanding whether DLL is a reliable alternative to backpropagation or just an interesting theoretical idea with promising empirical results.
Experimental Designs Or Analyses: The authors conduct a reasonable set of experiments to compare DLL with other learning algorithms, but there are some issues. The choice of datasets is somewhat limited, as all datasets used are relatively small. There is no evaluation of DLL on tasks requiring deeper networks or larger-scale learning. The comparison to backpropagation is primarily based on accuracy, but efficiency is not analyzed. There is no indication of whether DLL requires significantly more training time or memory compared to standard backpropagation. Additionally, there is no ablation study to determine which aspects of DLL are most responsible for its performance. Without such analysis, it is unclear whether the proposed approach is truly necessary or if simpler modifications to existing biologically plausible learning rules could achieve similar results.
Supplementary Material: The provided code files were reviewed, including layer.py, model.py, and utils.py. The implementation appears to be functional, but there is a lack of documentation, which could make it difficult for others to reproduce the results. There are no details about hyperparameter selection or computational cost, which are important for understanding how DLL performs in practice.
Relation To Broader Scientific Literature: The paper positions DLL as an improvement over previous biologically plausible learning algorithms, particularly feedback alignment and predictive coding. The work is connected to existing neuroscience literature on pyramidal neurons and local synaptic plasticity, which supports its biological inspiration. However, there is little discussion of how DLL relates to other learning methods beyond biologically plausible algorithms. It would be useful to compare DLL’s principles with energy-based models or reinforcement learning approaches that also incorporate local learning rules.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: One of the strongest aspects of this paper is its attempt to provide a biologically inspired learning method that is more plausible than backpropagation while maintaining strong performance. The empirical results suggest that DLL is a promising alternative, and the idea of using separate backward weights is an interesting contribution. However, there are some weaknesses that need to be addressed. The paper lacks a theoretical analysis of DLL’s convergence and stability. The experimental evaluation does not include large-scale tasks, so it is unclear how well DLL scales to more complex problems. There is no discussion of the computational efficiency of DLL compared to backpropagation. The code provided is not well-documented, making it difficult to reproduce the results.
Other Comments Or Suggestions: The authors should include a discussion of DLL’s efficiency and computational cost relative to backpropagation. They should also provide more details on hyperparameter sensitivity and whether DLL requires careful tuning to perform well. The code should be better documented to improve reproducibility. It would also be useful to include a theoretical analysis of DLL’s stability and convergence properties.
Questions For Authors: How does DLL compare to backpropagation in terms of training time and memory usage? If DLL is slower, what are the trade-offs in terms of biological plausibility versus efficiency?
Would DLL generalize well to large-scale datasets like ImageNet or more complex NLP tasks? Have any experiments been conducted to test its scalability?
Does DLL have any robustness advantages, such as resistance to adversarial attacks or improved performance on noisy data?
Are there specific hyperparameters that need to be fine-tuned for DLL to work well, or is it stable across different settings?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: 1. More experimental validation, especially real-time learning, to show simultaneous forward and backward.
Our approach enables real-time learning, as higher-layer neurons propagate signals backward only when a discrepancy between the output and the label is detected. Otherwise, no adjustments are made, and no backpropagation signals are generated.
While this mechanism is theoretically sound, we are uncertain about the best way to empirically validate it.
We would greatly appreciate any suggestions on how to design experiments to test and confirm this behavior.
2. Accuracy gap on complex datasets.
We primarily focus on comparing the biological plausibility of various learning algorithms rather than optimizing for high performance.
Within these biological constraints, we propose the DLL algorithm, which further narrows the performance gap with BP.
As shown in Table 1, DLL achieves the best performance among algorithms that satisfy all three criteria.
3. Theoretical analysis for convergence.
Please refer to the 4th question of Reviewer 2B1A.
4. Complex datasets like ImageNet or NLP tasks are ignored.
Given our current computational resources (four 2080 GPUs with 12GB each), we estimate that conducting experiments on the full ImageNet within 7 days would not be feasible.
Therefore, we chose to evaluate our methods on Tiny-ImageNet.
The results are presented in our response to the third question from reviewer 2B1A.
For NLP tasks, we have followed [1] to conduct experiments on the "next-character-prediction" task in Section 4.3.
Additionally, we performed experiments on the text classification datasets Subj and Movie Review (MR), with results summarized below:
|Method|Subj|MR|
|-|-|-|
|BP_TextCNN|88.50%|74.68%|
|DLL_TextCNN|84.40%|70.79%|
The architecture is the same as the original TextCNN (Kim, EMNLP2014).
5. Computational cost, memory efficiency, or sensitivity to hyperparameters
The time consumption and memory usage for these experiments are summarized below:
|Method|Time Consumption (s/epoch)|Memory Usage (MB)|
|-|-|-|
|DLL_MLP|44.7|1595.3|
|DLL_CNN|169.8|1306.9|
|BP_MLP|31.6|1286.4|
|BP_CNN|99.0|1272.9|
To fairly compare time consumption across architectures, we used the CPU instead of the GPU.
DLL requires more training time and memory because both the forward weight $\mathbf{W}$ and backward weight $\Theta$ are updated simultaneously.
Our design is not driven by computational or memory efficiency; rather, we prioritize biological plausibility.
As for sensitivity to hyperparameters, we have included a comparison of various learning rates and sequence lengths in Figure 3.
6. The authors do not establish whether DLL optimizes a well-defined loss function in a way that guarantees stable learning
Figure 2(c) shows that models trained with DLL exhibit a stable decrease in training loss.
This figure is based on time-series forecasting experiments using RNNs on the Electricity dataset.
Similarly, for MLPs and CNNs, the training losses also show a consistent downward trend.
We will release all training logs upon acceptance.
7. No ablation study to determine which aspects of DLL are most responsible for its performance.
Our DLL algorithm is designed based on three criteria that we have summarized from current biologically plausible algorithms.
It can be viewed as a faster (without iteration) and more biologically plausible implementation of predictive coding (PC), leveraging the unique properties of pyramidal neurons. Our design removes weight symmetry and separates forward and backward computations.
As a result, the performance of DLL is similar to that of PC under certain conditions, and PC has been shown to achieve performance comparable to backpropagation [1][3].
We have conducted an ablation study on the backward weight $\Theta$ in Table 3.
We would appreciate any suggestions on additional ablation studies.
8. Code lacks documentation. No details about hyperparameter selection.
Upon acceptance, we will release our code with detailed documentation for reproducibility.
The selection of hyperparameters follows similar choices made in [1][2].
9. Discussion of other learning methods.
We will add related papers on apical dendrites.
Please refer to the 5th question of reviewer Crph.
10. Robustness and scalability?
Robustness is an interesting aspect, and we plan to explore it in future work.
We conduct scalability experiments:
|DLL-MLP architecture|MNIST test accuracy|
|-|-|
|784-1024-10|71.15%|
|784-1024-512-10|89.61%|
|784-1024-512-256-10|97.57%|
All MLPs are trained fairly, and the results show the scalability of DLL.
[1] Millidge B, et al. Predictive coding approximates backprop along arbitrary computation graphs. Neural Computation, 2022.
[2] Bartunov S, et al. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. NeurIPS, 2018.
[3] Salvatori T, et al. A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks. ICLR, 2024.
---
Summary: The paper introduces Dendritic Localized Learning (DLL), a biologically plausible learning algorithm inspired by the structure and plasticity of pyramidal neurons. The motivation behind DLL is to address three fundamental biological limitations of backpropagation:
- Weight symmetry – Backprop requires symmetric forward and backward weights, which is not biologically plausible.
- Global error signals – Biological neurons primarily use local learning rules rather than propagating global error through all the layers.
- Dual-phase learning – Backprop separates forward and backward passes, whereas biological learning does not have such a strict separation.
The proposed DLL framework models neurons with three compartments (soma, apical dendrite, basal dendrite) and introduces trainable backward weights instead of using transposed forward weights in error propagation. The algorithm enables local error computation within neurons and allows for simultaneous weight updates, satisfying biological plausibility criteria.
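The backward pass described above (trainable backward weights in place of the transposed forward weights, with locally computed errors) can be illustrated with a small toy sketch. This is not the authors' DLL rule: it uses a simple Kolen-Pollack-style alignment update for the backward weights, and all names, sizes, and learning rates here are illustrative.

```python
import numpy as np

# Toy sketch (NOT the authors' DLL rule): a two-layer network whose backward
# pass uses an independent, trainable weight matrix `theta` instead of W2.T,
# so no weight symmetry between forward and backward passes is required.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
theta = rng.normal(0.0, 0.5, (n_hid, n_out))  # trainable backward weights

x = rng.normal(size=n_in)
y = np.array([1.0, 0.0])
lr = 0.05

losses = []
for _ in range(300):
    h = np.tanh(W1 @ x)                    # forward pass
    out = W2 @ h
    e_out = out - y                        # error computed locally at the output
    e_hid = (theta @ e_out) * (1 - h**2)   # propagated through theta, not W2.T
    W2 -= lr * np.outer(e_out, h)          # layer-local weight updates
    W1 -= lr * np.outer(e_hid, x)
    # Kolen-Pollack-style rule: theta receives the transpose of W2's update,
    # so it aligns with W2.T over training without ever copying it.
    theta -= lr * np.outer(h, e_out)
    losses.append(float(e_out @ e_out))
```

On this single-sample toy problem the squared error decreases steadily even though the transposed forward weights are never used in the backward pass, which is the weight-symmetry point the summary highlights.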
## update after rebuttal
I appreciate the authors' effort in the rebuttal. While I understand the importance of maintaining uniformity across algorithms, I do believe that for a new learning algorithm it is also useful to measure or discuss compatibility with standard regularization techniques like weight decay and batch normalization. Also, I don't fully agree that adding dropout, for instance, makes the approach biologically implausible. Noise is a salient feature of the learning machinery of the brain, and dropout adds a source of noise.
I am not sure if it is a typo, but the authors report DLL to be outperforming on the most challenging dataset (TinyImageNet), which seems highly unlikely, and the numbers are quite high compared to CIFAR-10 and CIFAR-100. It is also a bit concerning to see the model performing better on TinyImageNet than on CIFAR-10.
With these concerns remaining, I will retain my original score and do not feel confident to champion the paper for acceptance.
Claims And Evidence: The authors claim that DLL satisfies all three criteria for biological plausibility while maintaining strong empirical performance compared to prior biologically plausible learning algorithms. They provide:
- Mathematical derivation of the DLL learning rule and its update equations.
- Empirical performance on MLPs, CNNs, and RNNs across various datasets.
- Comparisons with other biologically plausible algorithms, showing that DLL achieves superior accuracy among methods that meet all three criteria.
Methods And Evaluation Criteria: The experimental methodology is well-structured and covers different architectures (MLPs, CNNs, and RNNs) and datasets. This showcases the versatility and applicability of their learning algorithm.
Theoretical Claims: The paper presents a mathematical derivation of the DLL learning rule and its update equations. Reviewer did not verify the correctness of their derivation.
Experimental Designs Or Analyses: Overall, the experimental design is sound. I have a few concerns:
- The performance of backpropagation on CIFAR-10 with a CNN (75%) seems quite low. Can the authors provide any justification for this?
- The authors do not discuss the effect of common regularizations like weight decay or batch normalization. Does it work for modern architectures like ResNets?
- It would be insightful to see how DLL performs on more complex datasets like CIFAR-100 or Tiny-ImageNet.
Supplementary Material: I reviewed sections A, C and D.
Relation To Broader Scientific Literature: The paper clearly situates DLL within the field of biologically plausible learning.
Essential References Not Discussed: Not to reviewers' knowledge.
Other Strengths And Weaknesses: - DLL provides a promising approach which fulfills the biological plausibility criteria and provides comparable performance.
- Well written, well structured, and easy to follow.
For weaknesses, see the concerns mentioned above.
Other Comments Or Suggestions: In addition to addressing the concerns raised above, the manuscript would benefit from:
- A discussion on convergence properties and guarantees, which would make the manuscript much stronger.
- A discussion on how the authors believe the performance gap between backpropagation and DLL can be bridged, and on its applicability to modern architectures and complex datasets.
Questions For Authors: Q1) Can the authors comment on the applicability of DLL to deeper CNNs and modern architectures like ResNets?
Q2) Can the authors explain the low performance with backpropagation on CIFAR-10 with a CNN, and the effect of weight decay and batch normalization on DLL performance and convergence?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: ### 1. Performance of BP on CIFAR10 with CNN (75%) is low.
Firstly, as stated in Line 325 (left column), to ensure fair comparisons between different algorithms, we used the same architecture for all methods on a given dataset.
Specifically, for CIFAR-10, we employed a CNN with three convolutional layers (see Appendix C.2.2) without incorporating deep learning techniques such as batch normalization, residual connections, or dropout.
This design choice was made to maintain full biological plausibility and prevent potential biases that could arise from the selective application of these techniques across different algorithms.
Under these conditions, the accuracy achieved by our BP-CNN on CIFAR-10 is reasonable.
Secondly, previous studies like [1][2][3], which also did not incorporate additional training techniques, reported test accuracies of only 60%-70% on CIFAR-10 using similar CNN architectures.
Our primary objective was to establish a fair and unbiased evaluation framework, ensuring that comparisons reflect the intrinsic differences between algorithms rather than the influence of external training enhancements.
### 2. Effect of common regularizations like weight decay or batch normalization. Does DLL work for modern architectures like ResNets?
As mentioned in the last question, we did not incorporate additional training techniques.
If we were to use them, special design considerations would be necessary, as they may violate certain criteria.
For example, batch normalization and dropout behave differently during training and testing, and directly applying them to our DLL framework would break Criterion 3 (non-two-stage training).
We acknowledge that incorporating such techniques, particularly ResNets, could be beneficial for future optimizations and will consider them in subsequent work.
However, our primary objective in this paper was to explore and evaluate the biological feasibility of the algorithms without relying on external enhancements.
### 3. How does DLL perform on more complex datasets like CIFAR100 or Tiny-ImageNet?
We followed your suggestions and trained CNNs with BP and DLL on CIFAR-100 and TinyImageNet.
Consistent with previous work [1], we report test accuracy for CIFAR-100 and test error rate for TinyImageNet.
|Method|CIFAR100 (test accuracy)|TinyImageNet (test error rate)|
|-|-|-|
|BP_CNN|44.5%|78.6%|
|DLL_CNN|38.6%|82.9%|
Note that we do not use any additional training techniques such as batch normalization or residual connections. Our CNN architecture is similar to that in [1].
For CIFAR-100, the CNN consists of four convolutional layers with channel configurations of 3-64-64-128-64, followed by two fully connected layers.
For TinyImageNet, the CNN consists of five convolutional layers with filter configurations of 3-64-64-128-128-64, followed by two fully connected layers.
### 4. Discussion on convergence properties and guarantees would make the manuscript much stronger.
Thanks for your advice. We will add the following discussion on convergence properties and guarantees:
The loss function (Eq. 3) is designed to minimize the discrepancy between the top-down predictions and bottom-up outputs of each pyramidal neuron in the network.
To achieve this, we employ local gradient descent–based learning rules and neural plasticity mechanisms to update both forward and backward weights.
During each iteration, the differences between the network’s predictions and the ground truth propagate back through localized errors, effectively coordinating all neurons in an orchestrated manner.
As a result, neural responses collectively refine predictions over successive iterations, gradually reducing local errors and driving the network toward convergence.
While providing formal convergence proofs remains challenging due to the network’s nonlinear operations, our empirical results consistently demonstrate a steady decrease in loss throughout training, supporting the stability and effectiveness of our approach.
### 5. How can the performance gap between BP and DLL be bridged?
Currently, most biologically plausible learning algorithms exhibit a significant performance gap compared to BP. In this study, our primary goal is to move closer to BP while preserving biological plausibility.
If the sole objective were performance improvement, specialized training techniques such as normalization and dropout—adapted to our proposed DLL algorithm—could be explored in future work.
Reference
[1] Bartunov S, Santoro A, Richards B, et al. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. Advances in neural information processing systems, 2018, 31.
[2] Millidge B, Tschantz A, Buckley C L. Predictive coding approximates backprop along arbitrary computation graphs. Neural Computation, 2022, 34(6): 1329-1368.
[3] Salvatori T, Song Y, Yordanov Y, et al. A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks. ICLR, 2024.
---
Summary: The paper focuses on biological plausibility. Looking at prior work, this paper proposes three different metrics for biological plausibility, including (i) asymmetry between the forward and the backward weights, (ii) local losses, and (iii) non-two-stage training. With that, the paper proposes a new learning system, i.e., Dendritic Localized Learning (DLL), which they compare against other local learning approaches as well as end-to-end backpropagation. The obtained results on computer-vision classification datasets, time-series forecasting datasets, and NLP datasets highlight the potential of DLL in comparison to other local learning approaches.
Claims And Evidence: While the general claims are well-supported, the claim regarding the three properties being sufficient is really weak without any citation. I attempted to elaborate on that more in the later sections.
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes. The evaluation and analysis is straightforward as the paper focuses on standard benchmark datasets.
Supplementary Material: No
Relation To Broader Scientific Literature: The claim of a new contribution based on the analysis of properties of local learning systems ignores other work in this space. This is not a unique contribution of the paper, as previous papers did the same. Furthermore, the three learning criteria covered in the work can't be considered complete (e.g., see https://arxiv.org/abs/2302.01647 -- I'll cover this more in the weaknesses section). Finally, the paper ignored all of the recent local learning methods, and only focused on old ones. Particularly, the paper ignored all methods that attempt to scale to larger datasets (e.g., see https://arxiv.org/abs/2008.01342 and https://arxiv.org/abs/2302.01647). Local losses are a very active area of research. The covered literature on this is very weak, with just one paper.
Essential References Not Discussed: Yes, quite a lot of them. The paper didn't cover any of the most recent methods in this space, with almost all comparative analysis limited to papers from the late 2010s. E.g., local losses itself is a very active space, with LoCo being one of the most prominent papers in this space: https://arxiv.org/abs/2008.01342). There are now many extensions of this. I also elaborate on this in my other comments.
Other Strengths And Weaknesses: # Strengths
- Well motivated problem
- Simple and well-motivated method
- Well-written paper
- Comprehensive coverage in terms of evaluation
# Weakness
- Poorly supported claim of just three essential properties, without any citation from the neuroscience literature arguing for the biological plausibility of these ideas. They seem to emerge out of the blue. There are many other criteria explored in prior work. E.g., https://arxiv.org/abs/2302.01647 argued that self-supervised learning and no dependence within layers are also essential for biological plausibility. Hence, this paper is not the first to derive essential properties from prior work. Furthermore, it is hard to claim that the list is complete with these three properties.
- Ignores almost all of the recent related work in this space. Particularly, there has been a lot of focus on recent methods that attempt to make local learning methods scale to large-scale settings (such as ImageNet), while the papers cited in the current work even fail to work on MNIST at times. With correct local learning methods that detach one layer from another, they do satisfy C3 naturally as there is no end-to-end update.
- Weak architecture selection that already results in poor performance as a start, even with end-to-end BP. E.g., in table 1, the performance of CIFAR-10 with a CNN is just 75%, which is really weak.
Other Comments Or Suggestions: - Line 263 (left column): mention that 'f' represents the non-linearity (which is clarified in Algorithm 1)
- Line 318 (left column): "correctly backpropagated" -> "correctly predicted"
Questions For Authors: - How were the architectures selected? Why is the performance of the selected architectures poor on CIFAR-10?
- How were the hyperparameters tuned? Were they the same for all methods?
- Can this method be adapted to residual networks?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2
---
Rebuttal 1:
Rebuttal: ### 1. Poorly supported claim of 3 criteria without any citation. There are many other criteria explored in prior work.
Firstly, in Section 2.2, we provide a detailed introduction and evaluation of all representative biologically plausible learning algorithms. These algorithms served as the basis for the three criteria, as mentioned in Line 33 (left column). However, we acknowledge that we should have included more explicit citations before discussing these criteria in Section 2.1, and we appreciate the reviewer’s suggestion in this regard.
Secondly, our work specifically focuses on biologically plausible supervised learning algorithms. For instance, when discussing Hebbian Learning and STDP, we highlighted their limitation in leveraging supervised learning signals (Line 188, left column). While we acknowledge that studies such as [4][5] have proposed alternative criteria—such as self-supervised learning and independence between layers—we emphasize that these approaches are often based on unsupervised learning. We fully recognize the value of these criteria in assessing biological plausibility; however, it is challenging to incorporate all possible criteria within a single paper.
To address this, we will add a discussion and cite relevant studies in our revised manuscript to clarify our motivation. We appreciate the reviewer’s insightful feedback.
### 2. Ignorance of recent related work [3][4] on local learning. Local learning methods satisfy C3 naturally as there is no end-to-end update.
Firstly, as mentioned in the last question, our work focuses on supervised learning algorithms.
[3] proposed a layer-wise self-supervised pre-training algorithm based on random masking and image recovery, while [4] explored deepening model layers within unsupervised contrastive learning frameworks.
Although these methods help scale local learning approaches to larger settings, their biological plausibility is limited, as they do not fully adhere to Criteria 1 and 3.
In this study, our primary goal is to take a step closer to BP while maintaining biological plausibility.
To achieve this, we designed the DLL algorithm based on three key criteria derived from existing biologically plausible learning algorithms.
That said, we acknowledge the relevance of [3] and [4] and will incorporate a discussion of these works in Section 2.2 and the related work section.
Secondly, we define Criterion 3 as non-two-stage training, meaning there is no strict temporal segregation between forward and backward processes.
We believe that not all local learning methods satisfy this criterion. For instance, difference target propagation is a local learning method but does not meet Criterion 3, as we discussed in Line 143 (right column).
While the training methods in [3] and [4] do not involve end-to-end updates, they still exhibit a clear separation between forward and backward processes.
### 3. Writing suggestions: Line 263 and Line 318.
Thanks for pointing them out. We will correct them in our revised manuscript.
### 4. Why is the performance of the selected architectures poor on CIFAR-10?
Please refer to the first question of reviewer 2B1A.
### 5. How were the architectures selected? How were the hyperparameters tuned? Were they the same for all methods?
Yes, as stated in Line 325 (left column) and Appendix C.2, we ensured that all algorithms were evaluated using the same hyperparameters and architectures for a given dataset.
Our choices for architecture and hyperparameter settings were based on the methodologies outlined in [1][2].
### 6. Can this method be adapted to residual networks?
We recognize that there is biological evidence supporting inter-regional and inter-layer neuronal connections.
However, whether these connections function equivalently to residual connections in deep learning remains an open question.
Residual connections were originally introduced to mitigate the gradient vanishing problem in deep learning models, which is less of a concern for local learning-based approaches like ours.
As a result, their potential benefits may not be as relevant or impactful in our setting.
We acknowledge the importance of exploring such architectural enhancements.
In future work, we will consider incorporating residual connections to further investigate their potential impact and applicability.
### Reference
[1] Bartunov S, Santoro A, Richards B, et al. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. NeurIPS 2018.
[2] Millidge B, Tschantz A, Buckley C L. Predictive coding approximates backprop along arbitrary computation graphs. Neural Computation, 2022, 34(6): 1329-1368.
[3] Siddiqui S, Krueger D, LeCun Y, et al. Blockwise Self-Supervised Learning at Scale. TMLR.
[4] Xiong Y, Ren M, Urtasun R. LoCo: Local contrastive representation learning. NeurIPS, 2020.
---
Title: False Coverage Proportion Control for Conformal Prediction
Paper Decision: Accept (poster)
Summary: The authors propose using the Joint Error Rate (JER) control framework of Blanchard et al. (2020) to control the false coverage proportion (FCP) across multiple conformal prediction intervals. This approach leverages the exact joint distribution of conformal $p$-values derived in Gazin et al. (2024). They introduce a specific instantiation of the JER method, selecting a particular "template" and "threshold function," which they argue is optimal or tight, though this claim is not formally proven. Additionally, they propose a method for aggregating conformal $p$-values and intervals while ensuring that both the FCP and JER remain controlled. In experiments, the proposed methods outperform uncorrected approaches and existing, more conservative, methods.
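As background for the quantities in this review, here is a minimal split-conformal sketch (illustrative only, not the paper's JER-based procedure): marginal conformal $p$-values for $m$ test points built from $n$ calibration scores, and the resulting empirical false coverage proportion at level $\alpha$. All variable names and the score distribution are assumptions for illustration.

```python
import numpy as np

# Illustrative split-conformal sketch (not the paper's JER-based procedure).
rng = np.random.default_rng(1)
n, m, alpha = 200, 50, 0.1
cal_scores = rng.exponential(size=n)   # nonconformity scores on calibration data
test_scores = rng.exponential(size=m)  # scores of the true test labels

# Conformal p-value: p_j = (1 + #{i : S_i >= S_{n+j}}) / (n + 1)
p = (1 + (cal_scores[None, :] >= test_scores[:, None]).sum(axis=1)) / (n + 1)

# The level-alpha conformal set contains the true label iff p_j > alpha, so the
# false coverage proportion is the fraction of test points with p_j <= alpha.
covered = p > alpha
fcp = 1.0 - covered.mean()
```

Marginally each interval miscovers with probability at most $\alpha$, but the $m$ $p$-values share one calibration sample and are therefore dependent, which is why the FCP fluctuates jointly and motivates controlling it through the joint distribution of the $p$-values.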
Claims And Evidence: The authors claim that their approach is expected to be tighter than that of Gazin et al. (2024). Why is this the case? Can you provide a formal result?
Additionally, Section 1 states that “while the approach of Gazin et al. (2024) yields valid FCP bounds, they are fully parametric, which can entail conservativeness, as discussed by the authors.” However, upon reviewing their paper, they also consider distribution-free conformal prediction settings, and I do not see any explicit dependence on parametric assumptions. Can the authors clarify what they mean by "parametric" in this context and explain why they consider the approach of Gazin et al. (2024) to be conservative?
Methods And Evaluation Criteria: NA
Theoretical Claims: NA
Experimental Designs Or Analyses: The experiments appear sound and comprehensive.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: ### Strengths
- The application of the Joint Error Control (JER) framework of Blanchard et al. (2020) to conformal prediction is novel.
- The proposed aggregation scheme for conformal $p$-values and intervals that ensures control of both FCP and JER is novel and practically useful.
### Weaknesses
- It is not explained how calibrated conformal prediction intervals are derived from the calibrated $p$-values and the threshold family. How should the threshold family output by Algorithm 3 be used to construct conformal intervals with JER control? Without explicitly defining these intervals, the utility of Proposition 3 is unclear. Am I supposed to apply Corollary 1 with the adjusted interval? If so, this interval does not appear to be explicitly defined.
- The writing is somewhat unclear, and the paper could benefit from additional intuition and background on multiple testing methods. For instance, the motivation for introducing threshold families and templates, as well as their utility, may not be immediately clear to readers.
- The theoretical results primarily build on or modify existing work, and some key formal results are missing. Specifically:
1. There are no optimality properties established for the template proposed for JER control in Section 3.3.
2. There are no formal results justifying the claims of tightness and sharpness of the proposed intervals compared to existing methods (e.g., Gazin et al., 2024).
3. There are no formal or informal results demonstrating that the model aggregation method preserves the coverage properties, although JER control is established. If marginal coverage follows from JER control this should be stated.
Other Comments Or Suggestions: - In Lemma 1, how are \( p_1, \dots, p_m \) defined? Is \( p_j \) equal to \( P(Y_j) \), where \( P(\cdot) \) represents the p-value function?
- The concepts of threshold families and templates may be abstract for those unfamiliar with this literature. Providing some context on why these notions are introduced and their practical applications could be helpful.
- To facilitate broader adoption and improve accessibility within the CP community, it might be useful to present a less abstract version and intuition of the algorithm, e.g., introduce the method for a simple template and threshold function. Is the general idea to decrease the rejection thresholds of the p-values monotonically, thereby effectively lowering the significance (alpha) levels of the CP intervals? How does this procedure compare to and differ from the Benjamini-Hochberg method, which may be more familiar to readers?
- For the template constructed using Monte Carlo simulation described above in Proposition 3, is this template optimal in power when the number of MC replicates ($B$) approaches infinity? It would be beneficial to formally define the optimal template function, as the informal statement that it must "match the shape of the distribution of the $p$-values under exchangeability" is unclear. Does the proposed template approximate such an oracle template? Formal results on the optimality of the procedure or an oracle variant of it would be appreciated.
- Does the model aggregation scheme preserve the coverage properties of CP? Does this follow from JER control?
- The title does not seem appropriate: there are no tightness guarantees, "reliability" is vague, and CP is arguably reliable as is. The title should reflect the actual contributions of the paper, e.g., something like "Controlling the FCP/JER of CP".
Questions For Authors: - Once we have obtained the calibrated p-values using Algorithm 3, how do we derive the adjusted conformal prediction intervals? Are the sequence of \( t \)'s the new alpha values for the corresponding sequence of prediction intervals? Does Algorithm 3 provide a calibrated p-value function that can be used to construct the new interval following the approach in Definition 2? It would be helpful to include an explicit end-to-end algorithm for constructing the calibrated intervals.
- In proposition 2, does the definition of the interval C(alpha) depend on the threshold family? In Section 3.3, it sounds like the idea of the proposed method is to find a threshold family whose JER is controlled at a desired level. However, as mentioned above, it is not clear how this threshold family is mapped to new intervals.
- It would be beneficial to present a formal result in Section 3.3 on the JER control of the proposed method by combining Proposition 3 with Corollary 1. Specifically, for the method proposed by the authors, what is the value or order of \( j(\alpha, \delta) \)? How does the bound behave when \( \delta = 1/\sqrt{n} \) or \( \delta = 1/n \)?
# Post review
I have raised my score from a 2 to a 3 and am positive of the work.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and insightful comments. Please find our answers to the points raised below.
>In Lemma 1, how are $p_1, \dots, p_m$ defined? Is $p_j$ equal to $P(Y_j)$, where $P(\cdot)$ represents the p-value function?
For each $j \in [[m]]$, we define
$p_j := \frac{1}{n+1} \left(1 + \sum_{i=1}^n \mathbf{1}[S_j \le S_i]\right)$ as per Definition 1. We have added this to Lemma 1 in the updated version of the manuscript.
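For concreteness, the definition above can be sketched in a few lines of NumPy (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def conformal_p_values(cal_scores, test_scores):
    """Conformal p-value of each test score, as in Definition 1:
    p_j = (1 + #{i in [n] : S_j <= S_i}) / (n + 1)."""
    cal = np.asarray(cal_scores, dtype=float)
    test = np.asarray(test_scores, dtype=float)
    n = cal.size
    # for each test score, count calibration scores that are at least as large
    counts = (test[:, None] <= cal[None, :]).sum(axis=1)
    return (1 + counts) / (n + 1)
```

A large test score (poor conformity) yields a small p-value; a test score below every calibration score yields $p_j = 1$.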
> The concepts of threshold families and templates may be abstract for those unfamiliar with this literature. Providing some context on why these notions are introduced and their practical applications could be helpful.
We will add a pedagogical figure that explains the procedure in two steps in the updated manuscript -- we detailed this in our answer to reviewer DTau. In a nutshell, the threshold family can be viewed as a (vector) parameter, and templates are sets of candidate parameters, from which a candidate ensuring JER control (and hence FCP coverage) is chosen by the calibration algorithm.
**Regarding the "parametric" nature of the Gazin et al. bound**, we agree that the use of "parametric" can be confusing here. Indeed, the paper of Gazin et al. (2024) considers distribution-free conformal prediction settings. We wanted to emphasize that the **confidence envelope of $\widehat{F}_m$ they propose has a fixed shape**. Theorem 2.3 of Gazin et al. (2024) is a Dvoretzky–Kiefer–Wolfowitz (DKW) type inequality, proved to hold for the specific (known) dependence between conformal $p$-values. As noted in that paper (see Remark 2.6 therein), this approach can be conservative, which can be addressed by the JER calibration framework. CoJER is fully non-parametric in the sense that the shape of the template is itself derived from the joint distribution of $p$-values, and not chosen *a priori*. Our experimental results show that CoJER is much less conservative than the approach of Gazin et al. (2024) while preserving FCP control.
**Explicit construction of the confidence intervals:** we thank the reviewer for pointing this out. Once Algorithm 3 is run, we obtain a JER controlling family $t$. Corollary 1 states that, with probability at least $1-\delta$, $\forall \alpha \in[0,1], \quad \operatorname{FCP}(\mathcal{C}(\alpha)) \leq \frac{j(\alpha, \delta)}{m}$ with $j(\alpha, \delta) = \min \{j \in [[m]] : \alpha \leq t_j\}$. In turn, to obtain FCP control at level $\alpha$ with probability $\geq 1 - \delta$ we choose $\widehat{\alpha} = t_{\lfloor \alpha m\rfloor}$. Therefore, the FCP-controlling intervals can be explicitly written $\mathbf{\mathcal{C}(\widehat{\alpha})} = (C_{i, \widehat{\alpha}})_{i \in [[m]]}$ with $C_{i, \widehat{\alpha}} = \left[\hat{\mu}\left(X_{n+i}\right) \pm S_{(\lceil(n+1)(1-\widehat{\alpha})\rceil)}\right]$. The confidence intervals only depend on the threshold family via $\widehat{\alpha}$. We have added this explanation to the updated version of the manuscript.
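This end-to-end construction could be sketched as follows (a hedged illustration under our own naming; the threshold family $t$ is 1-indexed in the paper, while the array here is 0-indexed):

```python
import numpy as np

def fcp_intervals(preds, cal_scores, t, alpha):
    """Sketch of the FCP-controlling intervals described above: pick the
    adjusted level alpha_hat = t_{floor(alpha * m)} from a JER-controlling
    threshold family t, then form symmetric SCP-style intervals around the
    point predictions."""
    cal = np.sort(np.asarray(cal_scores, dtype=float))
    preds = np.asarray(preds, dtype=float)
    n, m = cal.size, preds.size
    j = int(np.floor(alpha * m))            # 1-based index into t
    alpha_hat = t[j - 1]                    # adjusted significance level
    k = int(np.ceil((n + 1) * (1 - alpha_hat)))
    q = np.inf if k > n else cal[k - 1]     # order statistic S_{(k)}
    return np.stack([preds - q, preds + q], axis=1)
```

The threshold family enters only through the adjusted level $\widehat{\alpha}$, after which the intervals are ordinary split-conformal intervals.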
**Template optimality**: while our experiments unambiguously show that CoJER leads to tighter FCP control than the bounds of Gazin et al, we currently do not have theoretical support for this result. In fact, even the formal definition of an optimal shape is not trivial, and we have left this exciting perspective for future work.
> Does the model aggregation scheme preserve the coverage properties of CP? Does this follow from JER control?
CoJER does not ensure a marginal coverage guarantee for each test observation when $m>1$. Indeed, we argue that this type of guarantee is not interpretable in the transductive case considered here. Instead, CoJER offers a strong probabilistic guarantee (FCP control) over the entire set of $m$ test observations. However, note that for a single observation ($m=1$), FCP control is equivalent to the marginal coverage guarantee offered by SCP.
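For readers less familiar with the quantity being controlled, the FCP of a batch of intervals is straightforward to compute; a small illustrative helper (not from the paper):

```python
import numpy as np

def false_coverage_proportion(y_test, intervals):
    """Fraction of the m test points whose true value falls outside
    its prediction interval (the FCP of the batch)."""
    y = np.asarray(y_test, dtype=float)
    lo, hi = np.asarray(intervals, dtype=float).T
    return float(np.mean((y < lo) | (y > hi)))
```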
**Significance level and comparison to the Benjamini-Hochberg procedure**: indeed, the reviewer's intuition is correct. CoJER builds a JER controlling family $t$ and outputs an adjusted level $\widehat{\alpha} = t_{\lfloor \alpha m\rfloor}$ for which the confidence intervals $C(\widehat{\alpha})$ control the FCP at level $\alpha$. This is markedly different from the Benjamini-Hochberg (BH) procedure. First, BH outputs a **rejection set** for which there is a statistical guarantee. In the transductive setting of conformal prediction, we are interested in **obtaining a guarantee for all test points simultaneously** and not only for a certain subset. This renders BH and other similar methods inapplicable in this context. Second, BH offers a guarantee on the **expected** proportion of False Discoveries and not in probability.
**Title**: we agree that the title is too vague and doesn't clearly distinguish the contribution. We propose to rename the paper **False Coverage Proportion control for Conformal Prediction**. To this end, we have sent a message to the PCs of the conference.
---
Rebuttal Comment 1.1:
Comment: Thank you for the helpful clarifications. I have raised my score. The authors have addressed my concerns. Overall, I think the paper makes a noteworthy contribution, and the authors' revisions help address my main concern, which was the clarity of writing.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their prompt response and for raising their score. We remain at the reviewer's disposal should they have any additional questions. | Summary: The paper introduces CoJER (Conformal Joint Error Rate), a novel method designed to improve the reliability of split conformal prediction (SCP) by controlling the False Coverage Proportion (FCP). While traditional SCP ensures marginal coverage over multiple test points, it does not guarantee that the proportion of test points outside their confidence intervals remains controlled with high probability. This limitation is crucial in real-world settings, where multiple predictions are made simultaneously, such as in healthcare and finance.
The authors propose CoJER, which leverages conformal p-values and Joint Error Rate (JER) control to obtain tight FCP bounds. Unlike prior work (e.g., Gazin et al., 2024), which uses fully parametric bounds, CoJER provides a nonparametric calibration procedure that ensures sharper confidence intervals. The method is extended to model aggregation, allowing for robustness across different predictive models. Extensive experiments on 17 OpenML datasets demonstrate that CoJER achieves better FCP control while maintaining shorter confidence intervals than existing methods.
Claims And Evidence: The claims made in the paper are generally well-supported by theoretical and empirical evidence. The theoretical guarantees for FCP control are rigorously derived, and the connection between conformal p-values, JER control, and FCP bounds is well-established. The proposed method is thoroughly compared to existing SCP-based approaches, particularly the parametric method of Gazin et al. (2024), which is known to be overly conservative.
Empirically, the extensive evaluation across multiple datasets and models confirms that CoJER:
Achieves valid FCP control while standard SCP fails.
Produces shorter confidence intervals compared to existing FCP-controlling methods.
Remains robust across different model choices through the aggregation framework.
However, one potential limitation is that the paper primarily evaluates performance on tabular regression datasets. The applicability of CoJER to classification problems or high-dimensional deep learning models is not explored in depth.
Methods And Evaluation Criteria: Yes, the methodology and evaluation criteria are well-aligned with the problem. The paper focuses on realistic tabular prediction tasks where ensuring tight and reliable confidence intervals is critical. The choice of OpenML datasets provides a diverse set of benchmarks, making the results more generalizable.
The evaluation metrics—FCP control, interval length, and empirical coverage rates—are appropriate for assessing the effectiveness of the proposed method. However, additional benchmarks on structured datasets (e.g., time-series, NLP tasks) could further strengthen the paper’s claims about general applicability.
Theoretical Claims: The proofs presented in Sections 3 and 4 appear to be mathematically sound. The derivation of JER-based FCP bounds follows from standard techniques in conformal inference and multiple testing. Specifically:
Proposition 1 (characterization of the joint distribution of conformal p-values) is correctly cited from Gazin et al. (2024).
Proposition 2 (link between FCP and JER) follows logically from Lemma 1 and prior work on multiple hypothesis testing.
Proposition 3 & 4 (nonparametric JER control) leverage Monte Carlo approximations, which are empirically validated.
I did not find any major issues in the proofs, but a more detailed discussion on the asymptotic behavior of CoJER for large-scale datasets would be beneficial.
Experimental Designs Or Analyses: The experimental design is robust and methodologically sound:
Multiple datasets: The evaluation on 17 OpenML datasets ensures that results are not dataset-specific.
Multiple models: The experiments consider Random Forest, MLP, SVR, KNN, and Lasso, providing insights into method robustness across different modeling paradigms.
Multiple baselines: The comparison includes standard SCP, the Gazin et al. (2024) method, and CoJER, making it comprehensive.
One area for improvement is the lack of ablation studies on:
The effect of different choices of transformation functions in CoJER.
The impact of varying JER thresholds on FCP control.
A deeper exploration of these factors could provide more insights into when and why CoJER outperforms existing methods.
Supplementary Material: Yes, I reviewed the Appendix, which contains:
Proofs for the theoretical results (Appendix A).
Algorithms for JER estimation and calibration (Appendix B).
Extended experimental results with additional breakdowns.
The supplementary material adds valuable clarity to the main text.
Relation To Broader Scientific Literature: This paper builds upon two key strands of research:
Conformal Prediction: The work extends Split Conformal Prediction (Lei et al., 2018; Vovk et al., 2005) to handle False Coverage Proportion (FCP) control.
Multiple Testing & JER Control: The approach adapts Joint Error Rate control (Blanchard et al., 2020) to derive nonparametric FCP bounds.
The primary novelty lies in:
Reformulating FCP control using conformal p-values and JER-based calibration.
Providing tighter, nonparametric bounds compared to prior parametric approaches.
Extending conformal aggregation techniques to improve robustness across models.
Essential References Not Discussed: The paper provides a comprehensive review of prior work, but could benefit from including:
Adaptive Conformal Inference: Angelopoulos et al. (2023) propose an adaptive method for controlling conformal prediction widths, which may be relevant for understanding how CoJER adapts to different datasets.
Resampling-based Conformal Methods: Recent work on Jackknife+ (Barber et al., 2021) explores alternatives to split conformal prediction, which could be useful for comparison.
Other Strengths And Weaknesses: Strengths:
Well-motivated problem: FCP control is a crucial extension to standard SCP.
Mathematically rigorous: The derivations are well-grounded in statistical theory.
Efficient and practical: CoJER remains computationally comparable to SCP.
Weaknesses:
Limited exploration of classification problems.
No ablation studies to test different parameter choices.
Monte Carlo estimation reliance—more discussion on computational trade-offs is needed.
Other Comments Or Suggestions: Figure 1 & 2: Labels could be clearer—mentioning "Relative Interval Length" explicitly in the y-axis would help.
Section 5: Define "$$\delta$$" earlier to improve readability.
Questions For Authors: How does CoJER perform on classification problems?
What is the computational overhead compared to standard SCP?
Can the method be extended to structured prediction tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and insightful comments. Please find our answers to the points raised below.
>However, one potential limitation is that the paper primarily evaluates performance on tabular regression datasets. The applicability of CoJER to classification problems or high-dimensional deep learning models is not explored in depth.
While our experiments focus on tabular regression datasets, we would like to emphasize that **CoJER is fundamentally agnostic to the specific predictive setting** as long as the transductive assumption holds (i.e., access to many test points). This stems from the fact that CoJER operates purely on conformal p-values inherited from the CP framework. As such, the method is applicable to classification tasks and other model families, including high-dimensional deep learning models, provided conformal $p$-values are available.
> No ablation studies to test different parameter choices.
The main parameter of CoJER is the aggregation function. We have performed an additional experiment to compare four possible choices: harmonic mean, arithmetic mean, geometric mean and quantile aggregation. In this setting, harmonic mean aggregation outperforms arithmetic mean, geometric mean and quantile aggregation consistently. Please see our answer to reviewer DTau for all experimental details.
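As an illustration of this aggregation step, three of the compared schemes can be sketched as below (our own naming; quantile aggregation is omitted for brevity, and the JER calibration of the paper must still be applied to the aggregated values):

```python
import numpy as np

def aggregate_p_values(p_matrix, scheme="harmonic"):
    """Combine a (K models x m test points) array of conformal p-values
    into one aggregated value per test point."""
    p = np.asarray(p_matrix, dtype=float)
    if scheme == "harmonic":
        return p.shape[0] / np.sum(1.0 / p, axis=0)
    if scheme == "arithmetic":
        return p.mean(axis=0)
    if scheme == "geometric":
        return np.exp(np.log(p).mean(axis=0))
    raise ValueError(f"unknown scheme: {scheme}")
```

The harmonic mean is dominated by the smallest p-values, which is one intuition for why it behaved best in this comparison.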
> Monte Carlo estimation reliance—more discussion on computational trade-offs is needed.
In our setting, we see the reliance on Monte Carlo estimation as a strength rather than a weakness, since it allows JER control with arbitrary precision at a small computation cost. Sampling from $P_{n,m}$ is done in $O(n + m)$. Moreover, this is done once and for all for given values of $n$ and $m$.
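The sampling step could be sketched as follows. For clarity this direct version costs $O(nm)$ per draw; the $O(n+m)$ cost mentioned above is achievable with ranking/sorting tricks (names are illustrative, not the paper's implementation):

```python
import numpy as np

def sample_joint_p_values(n, m, rng):
    """One Monte Carlo draw from the joint null distribution P_{n,m} of the
    m conformal p-values: under exchangeability of the scores, uniform
    random scores suffice."""
    u = rng.random(n + m)
    cal, test = u[:n], u[n:]
    counts = (test[:, None] <= cal[None, :]).sum(axis=1)
    return (1 + counts) / (n + 1)
```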
> What is the computational overhead compared to standard SCP?
In the transductive setting considered in this paper with $m$ test points, both SCP and CoJER require $O(m n \log(n))$ to obtain $m$ conformal $p$-values (each of them requires sorting $n$ conformity scores). For FCP control with $B$ MC samples, CoJER additionally requires $O(B m \log(m))$ for template generation and sorting, and $O(B m (\log(m) + \log(B)))$ for calibration using binary search.
Therefore, neglecting the logarithmic terms for simplicity, the complexity of standard SCP is $O(m n)$ and that of CoJER is $O(m (n+B))$, where $B$ is the number of MC samples (which is user-defined and does not depend on $n$ or $m$). In particular, the complexities are of the same order if $B$ is chosen to be of the same order as $n$.
> The paper provides a comprehensive review of prior work, but could benefit from including: Adaptive Conformal Inference: Angelopoulos et al. (2023) propose an adaptive method for controlling conformal prediction widths, which may be relevant for understanding how CoJER adapts to different datasets. Resampling-based Conformal Methods: Recent work on Jackknife+ (Barber et al., 2021) explores alternatives to split conformal prediction, which could be useful for comparison.
Both of these approaches provide marginal risk control, i.e. *for a single test point*. As argued above in our reply to reviewer 8Xty, such approaches could be leveraged to control the False Coverage Rate (FCR) but they do not provide FCP control in probability. Therefore, they are not more relevant than SCP as competitors for our method.
---
Rebuttal Comment 1.1:
Comment: I thank authors for addressing my concerns. I have raised my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their prompt response and their decision to raise their score. However, unless we are mistaken, the score remains unchanged. We would be grateful if the reviewer could confirm whether the update was submitted in case there is a technical issue. | Summary: This paper examines the limitations of Split Conformal Prediction (SCP) in controlling the False Coverage Proportion (FCP) across multiple predictions. While SCP ensures control over the False Coverage Rate (FCR), it does not provide high-probability guarantees on the actual proportion of non-covered intervals across multiple test points. To address this gap, the authors propose CoJER, a method specifically designed for FCP control in multi-point prediction settings. By reformulating SCP as a p-value thresholding procedure, they derive conformal p-values within a Joint Error Rate (JER) control framework, avoiding strong parametric assumptions. This results in a more adaptive and sharper bound compared to the prior work of Gazin et al. (2024). Additionally, the authors extend their method to aggregate conformal prediction intervals across different models, enhancing robustness and reducing sensitivity to specific modeling choices. Their results demonstrate that CoJER effectively controls FCP under any aggregation scheme, offering a principled approach to multi-point uncertainty quantification.
Claims And Evidence: The claim that "CoJER yields shorter intervals than the state-of-the-art method for FCP control and only slightly larger intervals than standard SCP." is empirically validated using benchmark datasets (OpenML).
However, the evaluation setup raises concerns. Since SCP does not ensure FCP control, it should not be included in interval length comparisons, as its intervals may be shorter at the expense of failing to meet the desired coverage guarantees. Consequently, the evaluation effectively compares CoJER against a single state-of-the-art method, limiting the strength of the claim. A more comprehensive comparison, potentially including additional baseline methods or alternative strategies for FCP control, would strengthen the evidence supporting this statement.
Methods And Evaluation Criteria: Using 17 OpenML datasets provides a solid foundation for evaluation. Additionally, reporting the relative interval length compared to the shortest interval across all methods, averaged over multiple splits per dataset, offers a clear basis for comparison. However, incorporating domain-specific real-world datasets from fields such as medicine or finance would have further strengthened the study, given the stated relevance of multi-point prediction in these areas.
Theoretical Claims: The proofs for Lemma 2, Proposition 2, and Proposition 3 appear sound.
Experimental Designs Or Analyses: I verified the validity of the experimental design. They used 17 OpenML datasets, performed 30 dataset splits, and tested five different regression models, providing a reasonable basis for confidence in the results. The choice of $ \alpha = \delta = 0.1 $ is appropriate based on the literature.
Supplementary Material: Yes. Appendix A and B.
Relation To Broader Scientific Literature: The paper lacks a dedicated related work section, which would help contextualize previous methods and enhance understanding. While the connection to SCP and the approach by Gazin et al. is well-established, a broader comparison with other risk-control methodologies should have been explored—particularly *conformal risk control* [1] and *risk-controlling prediction sets* [2]. Integrating these perspectives would provide a more comprehensive view of how CoJER fits within the broader landscape of uncertainty quantification and risk control.
[1] Angelopoulos, Anastasios N., Stephen Bates, Adam Fisch, Lihua Lei, and Tal Schuster. 2022. “Conformal Risk Control.”
[2] Bates, Stephen, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael I. Jordan. 2021. “Distribution-Free, Risk-Controlling Prediction Sets.”
Essential References Not Discussed: The only two works essential for understanding this paper are Gazin et al. [3] and the research on joint error rate [4], both of which are extensively discussed.
Other Strengths And Weaknesses: ### *Strengths:*
- Solid mathematical foundation with a well-structured flow of ideas.
- Strong empirical results that clearly demonstrate the superiority of the proposed method.
### *Weaknesses:*
- The title is too broad; *"Tight and reliable conformal prediction"* is a common goal in this field and does not clearly distinguish the contribution.
- Lacks sufficient intuitive explanations for key steps, with some missing motivations (e.g., the choice of the harmonic mean for the aggregation scheme).
- Limited comparison, as it considers only two methods, one of which does not ensure FCP control, restricting the strength of the empirical evaluation.
Other Comments Or Suggestions: The title of the paper is too broad, as achieving tight and reliable confidence intervals is a fundamental goal of any conformal prediction method. A more precise title should reference *CoJER* and its role in *FCP control* to better reflect the paper's specific contribution.
### *Questions:*
- *Line 62 (Page 2):* How do you define *"prediction intervals with a size close to the optimal length"*? What constitutes the optimal length in this context?
- What is the impact of $\delta $ on performance? Could setting $\delta $ to a very low value be beneficial?
### *Minor Suggestions:*
- Could you provide relevant examples of multiple test point settings?
- Most equations lack numbering. While this improves clarity, adding numbers would enhance readability and referencing.
- *Line 225:* The reference to *"Appendix B"* is not clickable.
- The legends in *Figures 1 and 2* would be more readable if placed above the plots.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and insightful comments. Please find our answers to the points raised below.
>However, the evaluation setup raises concerns. Since SCP does not ensure FCP control, it should not be included in interval length comparisons [...] including additional baseline methods or alternative strategies for FCP control, would strengthen the evidence supporting this statement.
To the best of our knowledge, the only existing approach that explicitly targets FCP control in CP is the work of Gazin et al. We would be glad to include additional baselines if others are brought to our attention, and we welcome any suggestions in that regard.
We agree that a fair comparison of interval lengths should ideally be restricted to methods that control the false coverage proportion (FCP). We chose to include SCP, despite it not controlling the FCP, to illustrate that CoJER provides substantially stronger coverage guarantees with only a modest increase in interval length.
> The paper lacks a dedicated related work section [...] other risk-control methodologies should have been explored—particularly conformal risk control [1] and risk-controlling prediction sets [2]."
For a new test point $(X_{n+1},Y_{n+1})$, SCP provides a confidence interval $C_{\alpha}\left(X_{n+1}\right)$ with the following guarantee:
$\mathbb{P}\left[Y_{n+1} \notin C_{\alpha}\left(X_{n+1}\right)\right] \leq \alpha$.
The two approaches mentioned by the referee extend SCP as follows:
- Conformal risk control replaces marginal miscoverage by a general loss $\ell$, aiming for the guarantee $\mathbb{E}\left[\ell(C(X_{n+1}), Y_{n+1})\right] \leq \alpha$.
- The approach in [2] provides the same guarantees as SCP, in the case of set-valued prediction.
As such, these approaches still provide *marginal risk control for a single test point*. Our paper focuses on the transductive setting, where $m$ such test points are available. In this setting, while the marginal guarantees could be leveraged to control the False Coverage Rate (FCR), they do not provide FCP control in probability. Therefore, they are not more relevant than SCP as competitors for our method. We have added text to Section 2.1 "Split conformal prediction" of the manuscript to clarify this important point.
> Line 62 (Page 2): How do you define "prediction intervals with a size close to the optimal length"? What constitutes the optimal length in this context?
In this context, we meant that CoJER produces prediction intervals with lengths close to those of SCP, while additionally providing formal FCP control—something SCP does not guarantee. We agree that the term "optimal" could be misleading and have reformulated this point in the updated version of the paper to avoid ambiguity.
>What is the impact of $\delta$ on performance? Could setting $\delta$ to a very low value be beneficial?
FCP is controlled with probability greater than $1 - \delta$. As $\delta$ becomes smaller, FCP control becomes increasingly stringent. In terms of performance, this means that decreasing $\delta$ increases interval width. Setting $\delta$ to a very low value (e.g. $\delta = 0.001$) would ultimately lead to a non-informative statement, with FCP control holding with overwhelming probability, for very wide intervals.
>The title is too broad; "Tight and reliable conformal prediction" is a common goal in this field and does not clearly distinguish the contribution.
We agree that the title is too vague and doesn't clearly distinguish the contribution. We propose to rename the paper **False Coverage Proportion control for Conformal Prediction**. To this end, we have sent a message to the PCs of the conference.
>Lacks sufficient intuitive explanations for key steps, with some missing motivations (e.g., the choice of the harmonic mean for the aggregation scheme).
Regarding the choice of the harmonic mean, we have performed an additional experiment to compare four possible choices: harmonic mean, arithmetic mean, geometric mean and quantile aggregation. Please see our answer to reviewer DTau for all experimental details.
>Limited comparison, as it considers only two methods, one of which does not ensure FCP control, restricting the strength of the empirical evaluation.
Please see our answer above: to the best of our knowledge, the work of Gazin et al. is the only existing approach that explicitly targets FCP control in CP. We would be glad to include additional baselines if others are brought to our attention, and we welcome any suggestions in that regard.
> Could you provide relevant examples of multiple test point settings?
A common example of a multiple test point setting in CP is real-time systems processing data in batches: in applications like fraud detection or content recommendation, models may process incoming data in batches for efficiency. Another common setup is the offline evaluation of predictive models: before deploying a model, it is often evaluated on a fixed test set.
---
Rebuttal Comment 1.1:
Comment: We thank the authors for their response. I have raised my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their prompt response and for raising their score. We remain at the reviewer's disposal should they have any additional questions. | Summary: This paper addresses the challenge of controlling the False Coverage Proportion (FCP) in split conformal prediction (SCP). While SCP provides computationally efficient confidence intervals, it only guarantees marginal coverage over multiple test points. The authors highlight that in real-world scenarios, where multiple predictions are made simultaneously, the FCP of standard conformal prediction algorithms fluctuates significantly. This work proposes CoJER, a novel Joint Error Rate (JER) control-based method that achieves tight and reliable FCP control using a refined characterization of conformal p-values in a transductive setting. The authors also extend this procedure to provide FCP control under any pre-specified aggregation scheme for using knowledge from multiple prediction models simultaneously.
Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The datasets, baselines, and evaluation metrics considered for the problem are appropriate.
Theoretical Claims: The proofs of the theoretical claims are accurate.
Experimental Designs Or Analyses: The experimental design and analysis is sound.
Supplementary Material: I have reviewed the supplementary material.
Relation To Broader Scientific Literature: This work provides a way to control the FCP over a given set of test points with high probability. It also extends the guarantees to aggregate the knowledge from multiple models to provide more efficient conformal intervals. While there has been previous work on both of these topics, this work offers a new perspective by using the p-value interpretation of conformal intervals to better use the rich literature on Joint Error Control using p-values. The algorithms mentioned in the paper improve the applicability of the conformal prediction framework to more real-world scenarios.
Essential References Not Discussed: The authors have provided a detailed overview of all relevant related works.
Other Strengths And Weaknesses: Strengths
- The paper addresses an important issue.
- The JER procedure and the aggregation procedure provide strong theoretical guarantees without any strong assumptions beyond the standard exchangeability assumption.
- The experimental evidence shows a clear advantage of using the proposed algorithms over the existing state-of-the-art.
Weaknesses
- The paper is very dense and hard to read. Some toy examples / walk-throughs of the procedure could be quite useful for understanding.
- The template-building procedure is not described well.
- The matrix dimensions are mentioned to be n x p, but p is never mentioned before. This is quite confusing. (It might be a typo, and it should be n x m.)
- The authors don't theoretically or experimentally explore the relative tightness of the finite sample FCP bounds depending on aggregation mechanism used.
Other Comments Or Suggestions: N/A
Questions For Authors: Please refer to the Weaknesses section above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and insightful comments. Please find our answers to the points raised below.
> The paper is very dense and hard to read. Some toy examples / walk-throughs of the procedure could be quite useful for understanding. The template-building procedure is not described well.
To improve the clarity of the paper, we will add a pedagogical figure that explains the procedure in two steps in the updated manuscript. The first panel illustrates the concept of JER control with an example of a template and of a calibrated threshold family. The second panel illustrates the intervals obtained with CoJER using the adjusted risk level $\widehat{\alpha} = t_{\lfloor \alpha m\rfloor}$ with $t$ the calibrated threshold family. We will also add pedagogical interpretations for the notions of threshold families and templates.
>The matrix dimensions are mentioned to be n x p, but p is never mentioned before. This is quite confusing. (It might be a typo making, and it should be n x m)
Thanks for pointing this out, this is indeed a typo -- the matrix dimensions are $n \times m$. We have corrected this in the updated version of the manuscript.
>The authors don't theoretically or experimentally explore the relative tightness of the finite sample FCP bounds depending on aggregation mechanism used.
To address the reviewer's concern, we performed an additional experiment on the 17 openML datasets used in the paper. In this experiment, we compare four possible aggregation schemes: harmonic mean, arithmetic mean, geometric mean and quantile aggregation [1].
We use the setup described in the main text, i.e., $\alpha = 0.1, \delta = 0.1$. Importantly, we first check that the FCP is controlled for all types of aggregation by reporting the FCP event coverage, as described in the main text. We also compute the (relative) interval width for each aggregation scheme, averaged across 20 splits.
| | Harmonic mean | Geometric mean | Arithmetic mean | Quantile aggregation |
|-----------------------------------|---------------|----------------|-----------------|----------------------|
| FCP event coverage | 94% | 100% | 100% | 100% |
| Interval width increase (vs best) | **0%** | +24% | +230% | +54% |
Coherently with the theoretical guarantees obtained in Proposition 4, **the FCP is controlled for all four aggregation schemes.** In terms of interval tightness, harmonic mean aggregation outperforms arithmetic mean, geometric mean and quantile aggregation consistently.
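For concreteness, the four aggregation rules compared in this experiment can be sketched as follows. This is our own illustrative sketch: the function name and signature are hypothetical, and the correction factors required for formal validity of aggregated conformal p-values (the subject of Proposition 4) are omitted.

```python
import numpy as np

def aggregate_pvalues(p, scheme="harmonic", gamma=0.5):
    """Aggregate per-model p-values into a single score.

    Illustrative only: validity of the aggregated conformal p-value
    generally requires a scheme-specific correction factor, which is
    omitted here, so this is not the paper's exact procedure.
    """
    p = np.asarray(p, dtype=float)
    if scheme == "harmonic":
        return float(len(p) / np.sum(1.0 / p))
    if scheme == "geometric":
        return float(np.exp(np.mean(np.log(p))))
    if scheme == "arithmetic":
        return float(np.mean(p))
    if scheme == "quantile":
        # Meinshausen et al. (2009): min(1, gamma-quantile of p / gamma)
        return min(1.0, float(np.quantile(p / gamma, gamma)))
    raise ValueError(f"unknown scheme: {scheme}")
```

By the harmonic–geometric–arithmetic mean inequality, the harmonic mean always yields the smallest aggregated value of the three means, which is consistent with it producing the tightest intervals in the table above.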
### References
[1] Meinshausen, N., Meier, L., & Bühlmann, P. (2009). P-values for high-dimensional regression. Journal of the American Statistical Association, 104(488), 1671-1681. | null | null | null | null | null | null |
Mixture of Hidden-Dimensions: Not All Hidden-States’ Dimensions are Needed in Transformer | Accept (poster) | Summary: This paper presents a novel Transformer architecture to address the challenges associated with scaling hidden dimensions. The proposed MoHD (Mixture of Hidden Dimensions) leverages the observation of high hidden-dimension sparsity to enhance computational efficiency and model performance. In experiments, MoHD outperforms vanilla Transformers and MoE models on efficiency and performance. Several techniques, such as shared sub-dimensions, are introduced to dynamically activate sub-dimensions through a routing mechanism, ensuring that both common and specialized tokens are effectively modeled.
## update after rebuttal
The rebuttal addresses my questions well. I am pleased to keep my positive score. I also note that all the reviewers have positive ratings, and therefore I believe we have reached a consensus on this submission.
Claims And Evidence: MoHD reportedly achieves a 1.7% performance gain with 50% fewer active parameters and a 3.7% improvement at 3× parameter scaling. These claims are empirically supported through rigorous testing. Results are convincing.
Methods And Evaluation Criteria: The MoHD architecture integrates shared/specialized sub-dimensions and dynamic routing, a well-grounded approach for intermediate dimensions. The proposed method aligns well with issues identified in observational studies. Evaluations on NLP tasks are convincing. Parameter efficiency and task performance as metrics are practical choices.
Theoretical Claims: The method is built upon the observations. Experimental results demonstrate the effectiveness of the proposed method.
Experimental Designs Or Analyses: Experiments are comprehensive across many baselines to validate MoHD.
Supplementary Material: Terms, extended literature review and more observations are provided, which is help to understand the study -- but I did not check them carefully.
Relation To Broader Scientific Literature: The work is related to the design of efficient LLM architectures:
1. Sparsity among dimension, heads, activation, etc.
2. Conditional computation such as MoE.
Essential References Not Discussed: While key sparse and adaptive architecture studies are cited, recent MoE advancements require further discussion (1).
1. R. Cai et al. Flextron: Many-in-One Flexible Large Language Model.
Other Strengths And Weaknesses: **Strengths**:
- The paper is mostly well-written and easy to follow.
- The proposed method is technically sound with well-visualized observations.
- Experiments are conducted with numerous model sizes. The ablation studies effectively validate components of MoHD.
**Weaknesses**:
- The experiments were conducted on models with up to 1B activated parameters. It could be helpful to show the sparsity patterns in larger models, which may be different.
- Exploring the combination of MoE and MoHD could help unlock sparsity benefits across multiple dimensions.
- Further ablation studies to isolate the effects of shared vs. specialized sub-dimensions and routing mechanisms would clarify why finer-grained routing harms performance while higher expert specialization improves outcomes.
- How sensitive is MoHD to routing variations (e.g., K-value selection, thresholds)? Was stability tested under different routing configurations?
- Can MoHD adapt to non-NLP tasks without architectural modifications? Or are design adjustments necessary?
Other Comments Or Suggestions: The paper can be improved by:
- Providing theoretical grounding for MoHD’s sparsity and routing mechanisms.
- Extending evaluations to other domains/larger models and real-world applications.
- Clarifying limitations/failure modes when deploying MoHD across tasks/architectures.
Questions For Authors: See Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your high evaluation of our work!
**Q1:** Providing theoretical grounding for MoHD’s sparsity and routing mechanisms.
**R1:** We refer to our response to Reviewer fdJw Q4, where we discuss how sparse mixed activation expands effective width, reduces complexity, and improves loss. Due to space constraints, we will include the full theoretical derivation in the final version.
**Q2:** Extending evaluations to other domains/larger models and real-world applications.
**R2:** As noted in h4Rw Q3, MoHD can scale to larger models, particularly those based on Transformer architectures. For cold-started models, MoHD is expected to apply similarly. We plan to extend evaluation to more domains in future work.
**Q3:** Why finer-grained routing harms performance while higher expert specialization improves outcomes.
**R3:** Our experiments show that performance peaks when hidden representations are divided into 16 sub-dimensions (each of size 256). Finer routing granularity leads to degradation due to:
- **Redundancy**: Over-partitioning causes sub-dimensions to capture overlapping or low-importance features, reducing capacity efficiency.
- **Routing instability**: Finer granularity makes routing decisions more sensitive and harder to stabilize during training.
- **Fusion mismatch**: Our Group Fusion Layer re-integrates sparsely activated sub-dimensions. A moderate number of active groups is easier to optimize, while excessive fragmentation hinders training.
As for **expert specialization**, we interpret it as each sub-dimension focusing on a narrower subspace or data distribution. This expands representational diversity and enables the model to better capture fine-grained patterns, improving generalization across tasks.
**Q4:** How sensitive is MoHD to routing variations and stability tested under different routing configurations?
**R4:** Thank you for the insightful question. We analyzed MoHD’s sensitivity and stability under varied routing configurations, with key findings as follows:
- **Routing stability**: As shown in *Figure 7*, sub-dimension selection probabilities mostly stay within 0.2–0.3, indicating active and balanced usage. Our **Sub-Dimension Load Balance Loss** further improves distribution and efficiency.
- **Shared subspaces enhance stability**: Best performance occurs when **75% of sub-dimensions are shared** across tokens. This mitigates instability in private subspaces. Beyond a certain partitioning level, additional sub-dimensions yield diminishing returns or harm performance due to routing instability.
- **Component sensitivity**: MoHD is more sensitive in **Attention** layers than **FFN** layers. Sparsification in Attention causes larger performance drops, suggesting that routing in this component requires finer, task-specific design—an area we aim to explore further.
We agree that routing stability is key to scaling MoHD and appreciate the reviewer’s focus on this aspect.
**Q5:** Can MoHD adapt to non-NLP tasks without architectural modifications?
**R5:** Thank you for the thoughtful question. While MoHD is developed for NLP, its core ideas may extend to other domains—particularly Vision-Language Models (VLMs), which also use Transformer architectures. Our perspective:
- **Shared structure**: Vision Transformers (ViTs) treat image patches as tokens, analogous to text tokens. These patches share global patterns while retaining local uniqueness, similar to linguistic semantics.
- **Core applicability**: If ViT hidden dimensions show both shared and token-specific activations, MoHD’s principle of selective sub-dimension activation may enhance efficiency and expressiveness.
- **Architectural adaptation**: Despite structural parallels, visual representations differ from language. Routing and partitioning strategies would need tuning to align with visual characteristics.
- **Multimodal fusion**: In VLMs, MoHD-induced sparsity could affect alignment across modalities. Incorporating sparsity-aware mechanisms without harming cross-modal interaction remains an open direction.
In short, while direct transfer isn’t trivial, MoHD’s principles are generalizable and worth exploring in vision and multimodal settings.
**Q6:** Extend Reference
**R6:** We sincerely thank the reviewer for the additional references. We will include a detailed discussion of them in the next revision. | Summary: - The proposed Mixed Hidden Dimensions (MOHD) aims to address the inefficiency of hidden dimension scaling.
- The core insight lies in the observation that only subsets of dimensions are activated across tokens, with some dimensions shared globally and others allocated as "private" dimensions.
- The MOHD model reportedly achieves comparable or superior performance to standard Transformers while reducing activation parameters by up to 50%.
Claims And Evidence: - Experiments support the claims for smaller models (e.g., 355M/495M parameters).
- Results are largely convincing, though extrapolation to larger models (e.g., 1.13B) raises questions.
Methods And Evaluation Criteria: - MOHD is evaluated across diverse tasks.
Theoretical Claims: The theoretical foundation requires further elaboration.
- While the shared/specialized sub-dimension concept is appealing, rigorous analysis of their interactions is lacking.
- The activation scaling mechanism, though practical, lacks theoretical justification for mitigating information loss. Deeper mathematical insights would ground the method more firmly.
Experimental Designs Or Analyses: - Ablations offer useful insights but could better explain design choices (e.g., group fusion, balance loss) and their specific contributions.
- Claims of MOHD superiority over MoE feel underdeveloped. A nuanced comparison of trade-offs (e.g., model scale, task complexity) would provide a more balanced perspective.
Supplementary Material: Yes
Relation To Broader Scientific Literature: - MOHD contributes to LLM Architecture, and Efficient-LLM
Essential References Not Discussed: Existing works also observe the sparsity of Transformers [1,2], and this paper is also related to the pruning methods of LLMs [3,4], which are recommended for citation and discussion.
[1] Li et al. The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers. ICLR 2023.
[2] Wang et al. Q-Sparse: All Large Language Models can be Fully Sparsely-Activated. CoRR abs/2407.10969 (2024).
[3] Ma et al. LLM-Pruner: On the Structural Pruning of Large Language Models. NeurIPS 2023.
[4] Dong et al. Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models. ICML 2024.
Other Strengths And Weaknesses: Pros:
- The writing and organization are relatively clear.
- A unique solution to inefficient hidden dimension scaling in Transformers.
- Convincing results for smaller models.
- The shared/specialized sub-dimension concept is novel and promising.
Cons:
- Insufficient theoretical grounding.
- Limited comparisons with other sparse/MoE architectures.
- Computational costs for scaling MOHD need clearer articulation.
Other Comments Or Suggestions: This paper would benefit from a deeper analysis of its limitations.
Questions For Authors: - How does MOHD compare to other sparse models (e.g., structured/activation pruning) in training efficiency and convergence?
- What is the theoretical basis for activation scaling? Why does it preserve activation flow despite reduced compute?
- How does MoHD improve performance while reducing activations?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are greatly encouraged by the reviewer’s positive feedback. Below, we address each of your comments in detail.
**Q1: Theoretical Grounding**
**R1:** Please see our response to **Reviewer fdJw (Q4)** for a more detailed explanation. In summary, we theoretically ground MoHD by showing that:
1. Sparse activation leads to **effective width expansion**;
2. This expanded width results in **lower empirical loss**;
3. A **hybrid scheme** of private and shared sub-dimensions yields better performance than either alone.
**Q2: Computational Costs Compared to MoE and Other Methods**
**R2:** We summarize MoHD’s computational characteristics relative to other approaches:
- **Compared to MoE**: MoHD sparsifies the hidden dimension across **all matrices**, while MoE targets only the FFN. Thus, MoHD achieves higher sparsity coverage. It does, however, introduce modest overhead from additional WTE parameters and routing operations, particularly at inference.
- **Compared to pruning and quantization**: MoHD avoids iterative pruning or post-training fine-tuning. It learns sparse activations **during training**, reducing overall training costs while maintaining performance.
- **Compared to activation sparsification**: Unlike post-hoc sparsification, which may degrade performance due to training-inference mismatch, MoHD maintains consistency between phases. This leads to **better generalization and stability** at a similar inference cost.
**Q3: Scaling Costs**
**R3:** Please refer to **Reviewer h4Rw, Q4** for detailed cost analysis across model scales. In brief:
- **Routing and fusion layers** add minimal parameter overhead.
- The primary computational increase comes from the **WTE layer**, which scales with model width.
- Notably, the performance benefits of MoHD **cannot be solely attributed to increased WTE size**, as demonstrated in our ablation and FLOPS analysis.
**Q4: How MoHD Improves Performance While Reducing Activations**
**R4:**
1. **Sparsity and Redundancy in Hidden Dimensions**
Empirical studies reveal that only a small subset of hidden dimensions are meaningfully activated. MoHD removes redundant activations, concentrating computation on **informative subspaces** and improving efficiency.
2. **Component-wise Sensitivity to Sparsification**
Consistent with prior work, our experiments show that the FFN layers tolerate, and sometimes benefit from, sparsification due to regularization effects. In contrast, MHA layers are more sensitive. MoHD accommodates this by applying **structured sparsity selectively**, preserving performance.
3. **Grouped Fusion Mechanism**
MoHD achieves efficiency via:
- **Activation scaling**: Dynamically adjusts magnitudes of active units for representational stability.
- **Grouped fusion**: Aggregates sparse activations to preserve expressive capacity.
These mechanisms enable MoHD to reduce activations without compromising—and often **improving**—performance. Our findings suggest that **well-structured sparsity** improves generalization by filtering noise and reinforcing salient patterns.
**Q5: Limitations of MoHD**
**R5:** We appreciate the chance to elaborate on current limitations:
- **Hyperparameter Sensitivity**
MoHD relies on key hyperparameters: sparsity ratio $\delta$, shared dimension proportion $\phi$, and total number of sub-dimensions $N$. Tuning the balance between **shared and specialized** sub-dimensions is especially important for optimizing generalization vs. specialization.
- **Routing Optimization Challenges**
Unlike MoE’s routing, MoHD activates sub-dimensions **within a single matrix**, making routing more complex. While effective, our current routing loss becomes less stable as the number of sub-dimensions increases, **limiting scalability**.
- **WTE Layer Growth**
As MoHD expands the effective width, the **WTE layer scales proportionally**, contributing to a non-negligible share of total parameters. This can offset efficiency gains at very large scales.
- **Information Degradation at Scale**
Despite using activation scaling and group fusion, large-scale downsampling and softmax weighting may still cause **skewed distributions**, suppressing useful but low-weighted sub-dimensions. This can reduce representational fidelity, especially under extreme sparsity.
**Q6: Extend Reference**
**R6:** We sincerely thank the reviewer for the additional references. We will include a detailed discussion of them in the next revision. | Summary: This paper proposes a LLM sparsification method, namely Mixture of Hidden Dimension (MOHD), to improve the efficiency of Transformer-based LLMs. Based on the observation that only a small subset of hidden dimensions is shared and activated across tokens in given texts, MOHD selectively discern shared and specific sub-dimensions. In this way, not all hidden dimensions are utilized, and thereby improves the parameter efficiency, and utmost retain competitive performance with original LLMs. Experimental results demonstrate the effectiveness of the proposed method.
Claims And Evidence: The experiments are basically sufficient. The model is established based on the observation, which provides empirical evidence for the motivation. To achieve the sparsification, the paper further proposes a routing mechanism. Experimental results indicate the effectiveness of the proposed method.
Methods And Evaluation Criteria: The paper conducts experiments on benchmark datasets and tasks with the widely-used LLaMA architecture. The method evaluation is convincing to me.
Theoretical Claims: The model is established based on the observation that not all hidden dimensions are utilized in LLMs for tasks. Though the underlying designations of the specific modules can be further illustrated, most parts are reasonable. Experimental results evaluate the method designs.
Experimental Designs Or Analyses: The paper conducts comparison experiments, ablation studies, and parameter analysis. The appendix also provides details of the experimental design and analysis.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The proposed method is related to general efficiency studies on LLMs. The research topic is important, and this paper may draw some interests in the community.
Essential References Not Discussed: Most are cited. More discussion and connection to existing MoE studies may be helpful in understanding the further contribution between them.
Other Strengths And Weaknesses: Strengths:
1. The motivation is mostly clear. The preliminary experiments and observations are inspirable.
2. The proposed method is somewhat interesting and mostly reasonable.
3. The experiments demonstrate the effectiveness of the proposed method.
Weaknesses:
1. Some details of the specific modules are not clear. The routing mechanism and activation flow can be further illustrated. The motivations of specific modules could be stated more explicitly.
2. The computational costs at different scales can be further clarified.
3. It is not clear whether the proposed method can be applicable to larger LLMs.
4. The writing can be further improved. The clarification of symbols can be detailed for better understanding.
Other Comments Or Suggestions: The paper is well-written, but technical explanations of the routing mechanism would benefit from further clarification. A more explicit comparison with MoE and other sparsification methods in terms of scalability would also enhance the work.
Questions For Authors: What are the computational costs of training and inference with MOHD compared to MoE and other sparsification methods?
Ethical Review Concerns: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the valuable comments and suggestions.
**Q1: Comparison with MoE and other sparsification methods**
**R1:** We respectfully refer to our responses to Reviewer fdJw (Q2, Q3) for a detailed comparison between MoHD and MoE, especially regarding routing design and efficiency. Comparisons with other sparsity-based methods are addressed in Reviewer 3ShX, Q1.
**Q2: The illustration and motivation of routing mechanism and activation flow**
**R2:** Thank you and we will further improve the writing. A detailed explanation is provided below:
- The motivation for introducing **Sub-dimension Scaling** comes from our analysis of activation patterns in Transformers (Section 2.2, Figure 10). We observed that after sparse activation, activation magnitudes tend to decrease, potentially causing **information loss**. To address this, we introduced a scaling factor $\alpha$, which adjusts the outputs of activated sub-dimensions to restore the original magnitude. This factor is dynamically computed based on the number of activated sub-dimensions, helping to **preserve consistent activation flow**.
- Our **Dynamic Routing** mechanism is motivated by the observation that many hidden dimensions have consistently low activation values, indicating **redundancy**. To reduce this, we designed a routing method that dynamically selects sub-dimensions based on input token characteristics, allowing the model to adaptively adjust its activations and enhance representational capacity. We also observed **shared activation patterns** across tokens and introduced **shared sub-dimensions** to capture these common features, reducing the complexity of routing and improving training efficiency. Experiments confirm that this approach yields **notable improvements**.
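The sub-dimension scaling idea described above can be sketched in a few lines. This is a hypothetical reconstruction for illustration: the choice of $\alpha$ as the ratio of total to active sub-dimensions is our assumption, and the paper's exact formula may differ.

```python
import numpy as np

def sparse_activate(h, mask):
    """Zero out unselected sub-dimensions, then rescale the survivors so
    the overall activation magnitude is preserved. The scaling factor
    alpha = total / active is an assumed form, not the paper's exact one."""
    total, active = mask.size, int(mask.sum())
    alpha = total / active
    return alpha * (h * mask)

h = np.ones(8)                                       # toy hidden vector
mask = np.array([1, 0, 1, 0, 1, 0, 1, 0], float)     # 50% of sub-dimensions active
out = sparse_activate(h, mask)                       # survivors scaled by 2
```

With half the dimensions masked, the survivors are doubled, so the total activation mass of the toy vector is unchanged.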
**Q3: MoHD's applicability to larger LLMs**
**R3:** Thank you for the suggestion. We believe applying MoHD to larger-scale language models offers several potential benefits:
* **Improved parameter efficiency**: Larger models typically suffer from higher parameter redundancy. This makes MoHD’s sparse activation mechanism more effective, as it helps reduce inefficiencies without sacrificing expressive power.
* **Accelerated performance gains with scale**: As shown in **Table 1**, under the same proportion of activated parameters, the performance improvement of MoHD over the baseline tends to grow as the total parameter count increases. This suggests that the advantages of MoHD may scale positively with model size.
* **Enhanced representation and generalization capacity**: By combining shared and specialized sub-dimensions, MoHD captures both general features and fine-grained, token-specific patterns. Scaling up the model increases the size of each sub-dimension, potentially improving its ability to model complex language phenomena and enhancing generalization across tasks.
Due to the significant time and computational costs associated with pretraining larger LLMs, we were unable to include full-scale experiments in the current version. However, we plan to include these extended results in a future revision to address your suggestion more thoroughly.
**Q4: The computational costs at different scales can be further clarified**
**R4:** Thank you for this valuable suggestion. To provide a clearer view of the computational overhead, the following table presents the theoretical forward-pass FLOPS for models equipped with Word Token Embeddings (WTE), routing, and group fusion layers under different configurations:
|Model Size|MoHD 50%|MoHD 75%|Baseline 100%|2× Width|3× Width|4× Width|
|-|-|-|-|-|-|-|
|355M|2.70E+12|3.63E+12|4.56E+12|5.40E+12|6.24E+12|7.07E+12|
|495M|4.19E+12|5.59E+12|7.00E+12|8.40E+12|9.73E+12|1.11E+13|
|1.13B|6.93E+12|9.80E+12|1.26E+13|1.40E+13|1.55E+13|1.69E+13|
As shown above, although increasing model width leads to a proportional increase in WTE-related FLOPS, the relative impact on total FLOPS remains modest. This behavior is partly attributed to the architecture design: deeper models with a higher depth-to-width ratio benefit more from MoHD, as the ratio of activated parameters to total parameters becomes more efficient during scaling.
It’s also important to note that the performance gains of MoHD are **not merely a result of increased WTE size or parameter count**. For instance:
* The FLOPS of the **x4-width model** is significantly higher than that of **x3**, yet the performance improvement is marginal.
* The **1.13B model** benefits more from MoHD in terms of performance improvement compared to the 495M model, even though its **FLOPS-to-baseline ratio is actually lower**.
These observations reinforce our argument that MoHD achieves efficiency primarily through **structural sparsity and expert specialization**, not brute-force scaling.
**Q5:** The clarification of symbols.
**R5:** We apologize for the confusion and will provide clearer and more readable content in the next revision. | Summary: This paper proposes MoHD (Mixture of Hidden Dimensions), an architecture that optimizes hidden dimension usage via dynamic routing between shared and token-specific dimensions. Experiments demonstrate its superior parameter efficiency and task performance over existing models.
Claims And Evidence: The paper provides empirical evidence to support claims.
The authors empirically show sparse activation patterns in hidden dimensions across tokens: some shared, and others token-specific. Extensive experiments validate the performance gains with low computational overhead. Ablation studies confirm component efficacy.
Methods And Evaluation Criteria: I think the method design is mostly reasonable. Though hidden dimension sparsity is known, cross-token activation modeling is novel and justifies shared sub-dimensions. Evaluations use standard NLP datasets and metrics, aligning with community norms.
Theoretical Claims: I think the theoretical contributions are mostly based on the observations of activation patterns and routing strategies. While the motivations are intuitive, further analyses can be enhanced.
Experimental Designs Or Analyses: I think the experiments are mostly convincing.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA. Most related works have been cited.
Other Strengths And Weaknesses: I think this paper has strengths as:
1. Novel Approach.
2. Clear writing.
3. Robust generalization across NLP tasks
4. Persuasive ablation studies.
For the shortcomings of this paper, please refer to Questions.
Other Comments Or Suggestions: There are some terminology and symbols requiring more clarification:
1. Terminology clarification: Does *dimension* refer to embedding size, specific dimensions, or all dimensions?
2. Symbols clarification, e.g., $\otimes$ in Equation 3.
Questions For Authors: 2. Why does the 495M MoHD underperform on WinoGrande (WG)?
3. Is it possible to apply the existing MoE model and Dense model to the MoHD architecture?
4. Could you describe in more detail the differences between the MoHD and MoE approaches in terms of routing optimization, performance efficiency, etc.? Can these two approaches be combined?
5. Can you provide some theoretical basis for MoHD?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the valuable comments and suggestions.
**Q1: On MoHD-495M’s performance on WinoGrande (WG)**
**R1:** We respectfully clarify that MoHD-495M does **not consistently underperform** on WG. In fact, our MoHD 50%-495M model achieves **52.7%**, outperforming the **LLaMA2-495M baseline (51.3%)**. At larger scales (e.g., 1.13B), MoHD shows **clear gains** under multiple configurations (75% x2, x3, x4), indicating strong adaptability.
That said, performance dips in certain settings may stem from:
- **WG-specific reasoning demands**: WG requires fine-grained commonsense inference, which may benefit from further adaptation of MoHD’s routing and activation strategies.
- **Routing sensitivity**: Some configurations may have suboptimal routing for WG due to overfitting to other tasks. Ensuring robustness across specialized reasoning datasets like WG is a valuable direction for future work.
**Q2: On integrating MoE into MoHD**
**R2:** We appreciate this suggestion. **Yes, integration is theoretically feasible**. MoHD introduces sparsity along the **hidden dimension (width)**, while MoE sparsifies the **intermediate dimension (length)**. These operate orthogonally and, in principle, could be combined for **multi-dimensional sparsity**, potentially improving parameter efficiency.
However, such integration would introduce **significant optimization and engineering complexity**. Further empirical studies are needed to assess whether their combination yields **synergistic or conflicting effects**.
**Q3: MoHD vs. MoE**
**R3:** Key differences:
- **Routing granularity**: MoHD routes at the **hidden dimension** level, tailoring token-specific subspace activations. MoE routes at the **expert** (subnetwork) level in the FFN.
- **Component scope**: MoHD applies to **both Attention and FFN**, while MoE is typically confined to FFN projections.
- **Capacity and interpretability**: MoHD expands **model width**, increasing per-token representation capacity. MoE expands the **FFN depth**, aiding memory but not width.
- **Efficiency**: MoHD reduces redundant activations in **both Attention and FFN**, offering **better scaling under fixed activation budgets** (see Table 2).
- **Challenges**: MoHD’s routing across hidden dimensions is **less explored** and demands **novel optimization and implementation**.
Despite these challenges, MoHD shows **notable improvements** over MoE when trained from scratch, highlighting its **promise for efficient scaling**.
**Q4: Theoretical Proof**
**R4:** We prove that mixed sparse activation achieves strictly better risk bounds.
**Lemma 1 (Unbiased Sparse Forward Pass).**
Let $h \in \mathbb{R}^n$ be a hidden layer. Apply a mask $r \in \{0,1\}^n$ with $\mathbb{P}(r_j=1) = p$, and define the sparsely activated output as $\hat{h} = \frac{1}{p}(r \odot h)$. Then:
$$ \mathbb{E}[\hat{h}] = h; \quad \mathbb{E}[||\hat{h}||^2] = ||h||^2 + \left(\tfrac{1}{p}-1\right) \sum_{j=1}^n h_j^2 = \tfrac{1}{p}||h||^2 $$
*Proof:* Linearity of expectation and variance decomposition.
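As a quick sanity check of Lemma 1 (our own addition, not part of the rebuttal), a Monte Carlo simulation of the masked forward pass confirms the unbiasedness claim and suggests the second moment scales as $\|h\|^2/p$ for coordinate-wise Bernoulli masks:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 8, 0.25, 200_000
h = rng.normal(size=n)

# Bernoulli(p) masks r, rescaled sparse activation h_hat = (r ⊙ h) / p.
r = rng.random((trials, n)) < p
h_hat = (r * h) / p

# Unbiasedness: E[h_hat] = h (up to Monte Carlo error).
assert np.allclose(h_hat.mean(axis=0), h, atol=0.05)

# Second moment: E[||h_hat||^2] = ||h||^2 / p.
emp = (h_hat ** 2).sum(axis=1).mean()
assert abs(emp - h @ h / p) / (h @ h / p) < 0.05
```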
**Corollary 1.1 (Effective Width).**
For $p = k/n$, training with sparse activation is equivalent (in expectation) to training a full network with width $n' = \frac{n}{k} \cdot \mathbb{E}[\|h\|_2^2] / \|h\|_2^2 \geq n$. Thus, sparse activation expands effective width.
**Lemma 2 (Approximation Error Decay with Width [Barron, 1993]).**
Let $f^* \in \mathcal{F}$, where $\mathcal{F}$ is a Barron space. For a network with width $n$, the approximation error satisfies:
$$
\epsilon(n) \leq \frac{C_f}{\sqrt{n}},
$$
where $C_f$ is the Barron norm of $f^*$.
**Corollary 2.1 (Mixed Activation Lowers Error).**
Here $s$ encodes global context, $g$ decodes shared features, and $u$ models per-token contributions; $\Theta$ denotes all parameters.
If $f^* = g(s(x)) + \sum_i u(x_i)$ with non-constant $g$, then:
- Token-only networks ($n = n_u$) have $\epsilon_B \geq \frac{C_g}{\sqrt{n_u}}$.
- Mixed networks ($n = n_s + n_u$) have $\epsilon_A \leq \frac{C_g}{\sqrt{n_s}} + \frac{C_u}{\sqrt{n_u}}$.
Choosing $n_s = \Theta(n_u)$ yields $\epsilon_A < \epsilon_B - \delta$ for $\delta = \frac{C_g}{2\sqrt{n_s}}$.
**Lemma 3 (Rademacher Complexity of Shared Dimensions).**
Let $\mathcal{H}_A$ (mixed) and $\mathcal{H}_B$ (token-only) have equal total parameters. Then:
$$
\text{Rad}(\mathcal{H}_A) \leq \text{Rad}(\mathcal{H}_B) - \Delta,
$$
where $\Delta = \Omega\left(\sqrt{\frac{n_s}{m}}\right)$ for $m$ samples.
*Proof:* Shared dimensions reduce the VC dimension [Bartlett, 1998]; then apply the Dudley entropy integral.
**Theorem (Risk Bound).**
For risk $R(f) = \epsilon(f) + C\cdot\text{Rad}(\mathcal{H})$:
$$ R_A \leq \underbrace{\epsilon_A}_{\text{lower error}} + C\cdot\underbrace{\left(\text{Rad}(\mathcal{H}_B) - \Delta\right)}_{\text{lower complexity}} < R_B $$
Sparse activation (i) expands effective width, which (ii) lowers approximation error when shared dimensions are included, while (iii) shared dimensions reduce Rademacher complexity - collectively proving $R_A < R_B$. | null | null | null | null | null | null |
Hessian Geometry of Latent Space in Generative Models | Accept (poster) | Summary: ## Update After Rebuttal
I maintained my score. Please see my response to the authors below for my reasons.
----
This work presents a novel technique for analyzing latent space geometry in diffusion models. Based on the reconstruction of the Fisher Information metric, this method approximates the posterior distribution of latent variables given synthetic samples generated from a diffusion model, and uses this information to learn the log-partition function of the variable $t$. To develop this method, the work relies heavily on the theoretical and mathematical works done in
[Amari and Armstrong (2013)](http://arxiv.org/pdf/1312.1103) and [Bryant (2024)](https://arxiv.org/abs/2405.06998) for its own derivations and assumptions.
When applied to diffusion models, this new method highlights structures of phase transition in the latent space, parameterized by interpolation values $\alpha$ and $\beta$ which represent $t$, allowing the authors to find geodesics or shortest possible paths between latent variables --- highlighting the complex behavior of the generation process in diffusion models.
Claims And Evidence: ## Claim
(1) For two-parameter systems, such as the classic Ising model and the TASEP, the work introduces a reconstruction of the Fisher metric that outperforms existing baselines designed for reconstructing thermodynamic quantities.
(2) When applied to diffusion models, the introduced method reveals a fractal structure of phase transitions in latent space, illustrated by abrupt changes in the reconstructed Fisher metric.
(3) Interpolation along geodesics is smoother than conventional linear interpolation. Moreover, the authors claim that the diffusion model exhibits a divergent Lipschitz constant with respect to the latent space at phase boundaries, using the geodesic information as part of their analysis.
## Evidence
(1) To support the first claim, the work first shows that the posterior $p(t | x)$ satisfies
$\underset{N \rightarrow \infty}{\text{lim}} \big ( p(t | x_1, \dots x_N) \big )^{1 / N} \overset{a.s.}{=} \mathrm{e}^{-D_{\log Z(t)} (t, t')}$ where $D_{\log Z(t)} (t, t') = \log Z(t) - \log Z(t') - \langle \nabla_{t'} \log Z(t'), \, t - t' \rangle$ is the Bregman divergence. The full proof is detailed in Appendix A. Some assumptions are made, but the main one is that the distribution belongs to the exponential family.
Then, with the posterior $p(t | x_1, \dots x_N)$, the log-partition function $\log Z(t)$ can be estimated with the trained parameter $\theta$ using the following loss
$
\mathcal{L}(\theta) = \int_S \mathrm{D}_\mathrm{JS} \big ( p(t | x_1, \dots x_N) \,\, || \,\,\, p_{\log Z_\theta} (t | t') \big ) dt'
$
Using $\mathcal{L}$, they computed the free energy of the Ising model and TASEP respectively. They recorded the free energy RMSE (root mean squared error) in Table (1), as well as the RMSE of the partial derivatives w.r.t the function $F$ or the Hamiltonian (total energy). Based on Table (1), it's clear that the work's method performs better since it has much lower RMSE than its baselines.
The accuracy of the computed free energy is verified against the exact free energy of the two simple models; see figures (3 and 4).
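For intuition (our own addition, not from the paper): for a one-parameter exponential family, the Bregman divergence of the log-partition function defined above coincides with a KL divergence, $D_{\log Z}(t, t') = \mathrm{KL}(p_{t'} \,\|\, p_t)$. This can be checked numerically for a Bernoulli model with natural parameter $t$, where $\log Z(t) = \log(1+e^t)$:

```python
import numpy as np

def log_Z(t):          # Bernoulli log-partition function: log(1 + e^t)
    return np.logaddexp(0.0, t)

def grad_log_Z(t):     # mean parameter: sigmoid(t)
    return 1.0 / (1.0 + np.exp(-t))

def bregman(t, tp):    # D_{log Z}(t, t') as defined above
    return log_Z(t) - log_Z(tp) - grad_log_Z(tp) * (t - tp)

def kl(tp, t):         # KL(p_{t'} || p_t) for Bernoulli distributions
    mu_p, mu = grad_log_Z(tp), grad_log_Z(t)
    return mu_p * np.log(mu_p / mu) + (1.0 - mu_p) * np.log((1.0 - mu_p) / (1.0 - mu))

t, tp = 0.7, -1.2
assert np.isclose(bregman(t, tp), kl(tp, t))  # Bregman divergence = KL
assert np.isclose(bregman(t, t), 0.0)         # divergence vanishes at t' = t
```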
(2) To tackle the second claim, the authors rely on the usage of a feature extractor $\mathcal{E}$ (in this case CLIP) to approximate the posterior $p(t | x_1, \dots x_N) \approx \mathrm{e}^{- \frac{N}{2} \lVert \mathcal{E}(x) - \mathcal{E}(x') \rVert^2}$, where $x \sim p(x|t)$ and $x' \sim p(x | t')$, for the high-dimensional (or image) setting.
Using the loss function $\mathcal{L}$ (mentioned above), the authors trained an MLP which represents $g_F (t) = \nabla^2 \log Z_{\theta^*} (t)$ obtained via $\theta^* = \text{argmin}_\theta \mathcal{L} (\theta)$. Using this network, they are able to observe the phase transitions detailed in figure (7). Keep in mind, they represent t as two interpolation parameters, $\alpha$ and $\beta$.
(3) With the Fisher metric $g_F(t) = \nabla^2 \log Z_{\theta^*} (t)$, the method is extended to explore geodesic (shortest-path) geometry. To do this, the authors rely on the approach of [Shao et al. (2018)](https://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w10/Shao_The_Riemannian_Geometry_CVPR_2018_paper.pdf) to find the geodesic along an interpolation trajectory between two images by minimizing the curve length $L[\gamma(t), g_F(t)]$ (see Eq. 25).
Although the results are mostly qualitative, we can see in figures (6 and 7), as well as in the Appendix (figures 11 and 12), that the geodesic interpolation obtained by minimizing $L$ is much smoother.
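To illustrate why minimizing the curve length $L[\gamma]$ yields smoother paths, here is a toy 2-D sketch (our construction, not the authors' code): under a conformal metric with a high-cost bump, standing in for large Fisher-metric values near a phase boundary, the discrete length functional prefers a detour over the straight chord:

```python
import numpy as np

def metric(p):
    # Toy conformal metric with a high-cost "bump" at the origin, standing in
    # for large values of the reconstructed Fisher metric at a phase boundary.
    return 1.0 + 10.0 * np.exp(-8.0 * np.dot(p, p))

def path_length(pts):
    # Discrete curve length: sum_i sqrt(g(midpoint_i)) * |Δγ_i| for a
    # conformal metric g * I, a discretization of the functional L[γ].
    mids = 0.5 * (pts[1:] + pts[:-1])
    segs = pts[1:] - pts[:-1]
    return sum(np.sqrt(metric(m) * np.dot(s, s)) for m, s in zip(mids, segs))

# Straight chord through the bump vs. an elliptic detour around it.
line = np.linspace([-1.0, 0.0], [1.0, 0.0], 33)
ts = np.linspace(0.0, np.pi, 33)
arc = np.stack([-np.cos(ts), 0.6 * np.sin(ts)], axis=1)

assert path_length(arc) < path_length(line)  # the detour is "shorter" under g
```

The paper (following Shao et al., 2018) obtains the geodesic by numerically minimizing this functional over the interior points; the sketch above only compares two fixed candidate curves.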
## Strength
(1) This is a very interesting paper which presents a method for obtaining information from the (intractable) partition function of a diffusion model. Through some formulations, the partition function (as a function of t), which the work uses to analyze the behavior of diffusion models, is shown to be very informative. Moreover, the results are very interesting.
(2) The formulations/derivations are rigorous and based on or inspired heavily by works done by [Amari and Armstrong (2013)](http://arxiv.org/pdf/1312.1103) and [Bryant (2024)](https://arxiv.org/abs/2405.06998). Hence, I think the paper has a nice story in which it follows very well. The presentation is good.
## Weakness
(1) With the exception of Table (1), the results feel more qualitative than quantitative, and Table (1) covers only the two simple models. It would be nice to have a similar quantitative comparison of geodesic versus linear interpolation; perhaps something like Table (2) of [Shao et al. (2018)](https://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w10/Shao_The_Riemannian_Geometry_CVPR_2018_paper.pdf).
(2) For the diffusion model analyses, the experiments were conducted only on a Stable Diffusion model. It would be nice to see this behavior for generative models other than Stable Diffusion as well.
Methods And Evaluation Criteria: In general, I believe the evaluation makes sense. Unfortunately, there are no baselines for the proposed method in the high-dimensional setting, only for the 2D model case (since the actual free energy can be computed for the Ising model and TASEP). To my knowledge, it is rather difficult to construct true baselines here.
Theoretical Claims: The paper definitely makes a lot of theoretical claims. Thus, relying heavily on works done by others, the authors were able to rigorously formulate their approach and also proofs behind it. See Appendix A for the most important proof.
Experimental Designs Or Analyses: Yes, I did; please see what I've said above. Nonetheless, I do not see problems with their experimentation; rather, I think their mathematical exposition could be written and organized slightly better.
Supplementary Material: I checked the entirety of the Appendix since it contains their proofs and experimental details. I paid especially close attention to Appendix B to understand their experimentation. However, I must confess I skimmed over Lemmas A.2 to A.4; I felt they were not important to what I wanted to check and understand.
I really enjoyed reading the proof of Lemma A.6 and of course, Thm A.1.
Relation To Broader Scientific Literature: I think this work is pretty important. We know that diffusion models are just energy-based models (EBMs) that learn the score, i.e., the gradient of the energy function (or log-probability). Unlike normal EBMs, to compute the log-likelihood or an estimate of the energy, we have to perform a numerical integration of the probability flow ODE, detailed in [Song et al. (2021)](https://arxiv.org/abs/2011.13456). This tells us how well a diffusion model has learned, but it doesn't fully tell us how diffusion models behave.
This work presents an interesting approach and a point of view of how we can interpret the intractable partition function, which we avoided in training diffusion models, such that we can analyze the behavior of diffusion models.
Essential References Not Discussed: Since you mentioned Ising model, I think it's also fair to cite Hopfield models in the work as well for relevancy. See [Amari, s-i. (1972)](https://ieeexplore.ieee.org/document/1672070) and [Hopfield J. (1982)](https://www.pnas.org/doi/10.1073/pnas.79.8.2554)
Another interesting work which relies on the analysis of the Jacobian of the score or Hessian of the energy is [Achilli et al. (2024)](https://arxiv.org/pdf/2410.08727?).
Other Strengths And Weaknesses: See Claim and Evidence Section
Other Comments Or Suggestions: Overall, I believe this is a good work, but it needs an additional quantitative result. For example, something like Table (2) of [Shao et al. (2018)](https://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w10/Shao_The_Riemannian_Geometry_CVPR_2018_paper.pdf) is probably sufficient.
Moreover, there are minor notation errors that should be corrected, and the semantics of some variables (especially those in the Appendix) should be stated explicitly to make them clear to the readers.
In the main text:
(1) Eq. (22) is labeled as $\mathcal{L}_1$ but shouldn't it just be $\mathcal{L}$ instead? I don't see this subscript 1 being used at all.
(2) In section 4.1, on line 304-305, you should use $\partial$ instead of $\mathrm{d}$ to indicate partial derivative.
In the Appendix:
(1) On line 617, it should be "where $\psi(s, x)$ ..." instead of "where $\psi(x, s)$ ..." just for consistency.
(2) What exactly does the variable $\mathbf{N}$ mean on lines 720 and 753? Do you mean to use $\mathbb{N}$ instead?
Finally, here is my ***main suggestion***. I think you should write an algorithm section detailing how you train your model $g_F(\theta, t)$. It should include the usage of the feature/encoder model. Furthermore, please include the details of the algorithm found in [Shao et al. (2018)](https://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w10/Shao_The_Riemannian_Geometry_CVPR_2018_paper.pdf) which you relied on, in the Appendix. It will help keep things clear for the readers.
Questions For Authors: See Above Section
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We highly appreciate provided feedback and agree that the paper will benefit from additional quantitative studies.
- Providing additional quantitative result, like Table (2) of Shao et al. (2018)
Please refer to the General Response below.
- Adding experiments for non-stable diffusion generative models
In addition to stable diffusion, we have applied the proposed approach to StyleGAN3 (2021). However, it resulted in geodesics close to linear interpolation and overall our findings coincide with Wang & Ponce (2021), so these experiments are not included in the main paper text. Please find Figure 9: "Free energy surface of a StyleGAN v3 reconstructed using CLIP distance" in supplementary material.
- Writing an algorithm section detailing how the model is trained, including the details of the algorithm found in Shao et al. (2018)
We agree that these algorithmic details should be written explicitly.
# The General Response
We would like to thank the reviewers for recognizing our method to approximate Fisher information metric as novel and theoretically grounded.
Since most reviews raise the question of comparing our work with Wang & Ponce (2021) and Shao et al. (2018), we would like to provide a detailed comment on the distinction between our proposed method and these prior works.
Shao et al. studies VAE and computes geodesic curves based on pullback metric induced by euclidean metric in pixel space. Wang et al. consider GAN models and defines metric on the latent space via pullback of L2 distance in the feature space of VGG-19 (LPIPS distance). In both cases generative models are deterministic: each latent Z produces only a single image X.
The key distinction of our work lies in its consideration of a broader class of models, specifically, models with stochastic generation, where a single latent Z corresponds to a distribution in the image space p(X|Z). Diffusion models with stochastic sampling (and all statistical physics models) cannot be addressed within the formalism of Shao et al. (2018) or Wang et al. (2021).
To fulfill the request of the reviewers we compare our algorithm with Shao et al. (2018) and Wang et al. (2021) in deterministic sampling regime. We get pullback metric from the euclidean metric in the space of CLIP embeddings. The Jacobian is estimated via finite differences. We use three evaluation scores: CLIP, pixel and Perceptual Path Length (PPL), which measure the average path length in CLIP embedding space, pixel space and feature space of VGG-19. We compute them as the cumulative L2 distance between consecutive feature vectors (or images for pixel space), then average this perceptual divergence measure across multiple trajectories.
Average path lengths ± std (bootstrap):

| Method | CLIP length | Pixel length | Perceptual Path Length |
|---|---|---|---|
| Geodesic (ours) | 72.2951 ± 3.7516 | 2769146.3818 ± 23813.2801 | 3.1172 ± 0.1587 |
| Geodesic | 73.5764 ± 4.3664 | 2739432.4217 ± 35251.5666 | 3.1857 ± 0.2134 |
| Linear | 73.6076 ± 3.5428 | 2757206.0770 ± 27731.2199 | 3.1725 ± 0.2256 |
We observe that our interpolation is on par with Shao et al. (2018) and Wang et al. (2021) in the case of deterministic sampling. We additionally compute the curvature of each trajectory as the mean angular change per unit length between consecutive path segments. Our analysis reveals that trajectories constructed using the Wang metric show significantly higher curvature and frequent turning compared to our method. We attribute this behavior to the finite differences, which introduce high frequency noise in metric components. In the case of diffusion, the Jacobian is hard to obtain via backpropagation as suggested in Shao et al. (2018) due to high computational cost.
Average mean curvature ± std (bootstrap):

| Method | Mean curvature |
|---|---|
| Geodesic (ours) | 0.3671 ± 0.6909 |
| Geodesic | 1.3261 ± 0.5328 |
| Linear | 0.0000 ± 0.0000 |
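For reproducibility, the reported quantities can be computed as sketched below (a plausible reading of the description above, not the authors' code; `path_stats` is our name): cumulative L2 length over consecutive feature vectors, and total turning angle divided by path length as the mean curvature:

```python
import numpy as np

def path_stats(feats):
    # feats: (T, D) trajectory of feature vectors (e.g. CLIP embeddings).
    # Returns cumulative L2 path length and mean turning angle per unit length.
    segs = np.diff(feats, axis=0)
    lens = np.linalg.norm(segs, axis=1)
    total = lens.sum()
    cos = np.einsum('ij,ij->i', segs[:-1], segs[1:]) / (lens[:-1] * lens[1:])
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return total, angles.sum() / total

# Sanity check: a straight trajectory has zero curvature, a zig-zag does not.
t = np.linspace(0.0, 1.0, 10)[:, None]
straight = np.hstack([t, t])
zigzag = np.hstack([t, np.where(np.arange(10)[:, None] % 2 == 0, 0.0, 0.1)])

assert np.isclose(path_stats(straight)[1], 0.0)
assert path_stats(zigzag)[1] > path_stats(straight)[1]
```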
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your response and hard work on your new findings. In my opinions, I believe these new findings are quite interesting. However, I dislike the fact that you did not denote whether these values are supposed to be good or bad depending on whether they are higher or lower. This will most likely confuse other reviewers. Anyway, I do have my interpretation about your new results --- I think your results on the mean curvature and avg. CLIP lengths and avg. Perceptual Path lengths are good.
I understand there are still some issues to be discussed with the other reviewers. But I think the works this paper follows are quite rigorous and provide good background and motivation. The approach presented in this paper is, if I recall correctly, not easily adaptable and scalable to higher dimensions, but it is a different way to interpret the problem of analyzing un-normalized generative models. I find it quite fascinating, frankly.
Nonetheless, I would like to keep my score and I wish the authors best of luck. | Summary: This paper proposes a novel method to approximate the Fisher information metric. It shows superior performance to concurrent methods on Ising and Tasep models. Applied to diffusion models, it reveals a fractal structure of the latent space, and sharp transitions. It also allow for smoother interpolations.
## Update
I will keep my rating as is, since I won't be able to evaluate the paper revision and the newly added presentation/details. Moreover, the experimental results are still not particularly strong and convincing when considering the std of the metrics.
Claims And Evidence: The paper claims that their method allows a good approximation of the Fisher information metric, which allows its use in thermodynamic models and on analysing latent space of diffusion models. To me, the empirical evidence is not strong enough (see below).
Methods And Evaluation Criteria: Method: The method is sound and well thought.
Evaluation criteria: The first evaluation criteria is a validation on exact statistical models for thermodynamics. I do not know well these models, and thus I am unsure of the interest of these results.
The second evaluation criteria is on the analysis of the latent space of diffusion models. The main weakness is the lack of comparisons with other methods. Why does the proposed method reveal things that would not be revealed by other methods? For example, it would be worth adding a simple baseline measuring the generator's local curvature (e.g. estimated with finite difference), which could also reveal these phase transitions. Moreover, it lacks metrics. Validating an idea based on a few visualizations is not rigorous enough. A potential metric could be the Perceptual Path Length introduced in StyleGan paper.
Theoretical Claims: I did not check the proofs. However, let us note that the theory proves the feasibility of their method rather than its superiority compared to other methods.
Experimental Designs Or Analyses: As mentioned above, the evaluation criteria is weak in my opinion. The first part is on toy data. The second part lacks comparisons with baselines.
Supplementary Material: I have not read the proofs in the supplementary material.
Relation To Broader Scientific Literature: The method described here is a novel method to approximate Fisher information metric in my knowledge.
Essential References Not Discussed: Not essential but would be worth discussing: 1: "Metrics for Deep Generative Models", Chen et al., AIStats 2018.
2: "Optimal transport maps for distribution preserving operations on latent spaces of Generative Models.", Agustsson et al., 2018.
Other Strengths And Weaknesses: In addition to the lack of strong experiments, a second weakness is the paper's clarity and low level of detail. To me, the method is not detailed enough; for example, what is described in 3.1 and 3.2 should be summarized in an Algorithm. Moreover, the details required to reproduce the experiments are lacking, which is a problem.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We thank the reviewer for highlighting gaps in our evaluation and presentation. The revision will strengthen the empirical validation with baselines and metrics and improve clarity via algorithm listings and reproducibility details. To address the concerns about comparison with other methods, please refer to the tables in the General Response. We hope these changes will address the concerns and demonstrate the method's utility for analyzing generative model geometries.
# The General Response
See the General Response in Rebuttal 1 above for the full comparison with Shao et al. (2018) and Wang & Ponce (2021), including the CLIP, pixel, PPL, and curvature tables. | Summary: This paper leveraged the information geometry framework to understand the manifold geometry of the parameter space of statistical mechanics models and of the latent space of generative models.
They provided nice theoretical results connecting the posterior distribution of the parameter given infinitely many samples with the Bregman divergence between that parameter and the true one. Further, they used this connection to learn the partition function Z, whose Hessian gives the Fisher metric of the manifold. They then use neural networks to estimate posterior distributions given samples and to learn the log-partition function. They validated the pipeline on classic statistical physics models, showing it can learn the free energy better than baselines. They also used it to study the latent space geometry of diffusion models, which elucidates a fractal structure in latent space.
## Update after rebuttal
The authors clarified most conceptual concerns regarding the method (esp. the concern about latent vectors falling off the shell in the final interpolation experiment). The reviewer is happy to maintain their score.
Claims And Evidence: - In the paper it is sometimes said colloquially that they are “estimating the log partition function log Z(t) by training a network to simulate p(t|x).” But we can only know log Z up to an affine transform, so perhaps the claim about estimating log Z(t) should be qualified?
- The formulation makes a lot of sense for statistical mechanics setting, and I’m happy to see it learning the correct partition function.
But for the generative image model latent space setting (with feature extractor), it seems a bit overly complex. I’m not clear / convinced that this framework is better / more informative than the previous ones to compute the Riemannian metrics for the latent space of generative models (see related works [1,2]). So I don’t buy the claim that “*Notably, the approach proposed below is theoretically justified for any generative model*”
[1] Shao, H., Kumar, A., & Thomas Fletcher, P. (2018). The riemannian geometry of deep generative models.
[2] Wang, B., & Ponce, C. R. (2021). The geometry of deep generative image models and its applications. ICLR
Methods And Evaluation Criteria: Yes. Use of solvable statistical physics models as a test case suits the theoretical framework of the authors.
**Question:**
- **Major point**
The Eq 16-19 is a nice theoretical formulation for using feature extractor.
But in the end, Eq. 19 does not seem very different from the previous Jacobian formulations, i.e., computing squared distances in some feature space and pulling them back to the latent space [2]. If we are using feature extractors anyway, how is the metric you obtain for the diffusion latent manifold similar to / different from the metric obtained by pulling back the Euclidean metric in feature space?
That could be done by a simple finite-difference method if you have all the sample images, with no need to backprop through the diffusion sampling process. I guess it may not be more expensive than the MLP training in your case.
- When you are using feature extractors, why should we learn log Z(t)? I feel learning a bivariate distance function d(t, t’) could also be possible, or even faster, to approximate D_log Z(t, t’) directly.
- **Minor point:** In Methods Sec 3.1, the neural network design is quite suited to 2d settings, i.e., the Ising models are 2d and the parameter space is also 2d, so a 2d→2d UNet mapping works. But it seems hard to generalize this to a higher-dimensional parameter space?
[2] Wang, B., & Ponce, C. R. (2021). The geometry of deep generative image models and its applications. ICLR
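The finite-difference pullback metric suggested in the first point above can be sketched as follows (a generic sketch of ours, with a linear toy "feature extractor" standing in for CLIP composed with the generator, so the exact answer $A^\top A$ is known):

```python
import numpy as np

def pullback_metric(f, z, eps=1e-4):
    # g(z) = J^T J, with the Jacobian J of the feature map f estimated
    # by central finite differences (no backprop through the sampler).
    z = np.asarray(z, dtype=float)
    J = np.stack([(f(z + eps * e) - f(z - eps * e)) / (2.0 * eps)
                  for e in np.eye(len(z))], axis=1)
    return J.T @ J

# Toy check: for a linear "feature extractor" f(z) = A z, the pullback
# of the Euclidean feature metric is exactly A^T A.
A = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])
g = pullback_metric(lambda z: A @ z, np.array([0.3, -0.7]))
assert np.allclose(g, A.T @ A, atol=1e-6)
```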
Theoretical Claims: I skim through proof of Theorem 3.1 and Lemma 3.3 which seems to be correct. I didn’t check theorem 3.2
**Question:**
- **Major point:**
I cannot quite follow the logic from Eq. 21 to 24, esp. 22. Why can we learn *log Zθ(t)* by minimizing the JS divergence in Eq. 22? Is it because of Theorem 3.1? Are there theoretical properties of the objective in Eq. 22? Also, does Eq. 21 require the latent space under study to be compact, or else the integration is impossible?
- **Minor point:**
from Eq. 16–19 the formalism is slightly off; I feel it mixes the deterministic sampling case and the stochastic sampling case, esp. Eq. 19.
- **Minor point:** In the degenerate case, where the generative mapping is deterministic, i.e., $p(x|t)$ is a delta measure, does the theory still work? This is relevant since for diffusion, if the sampler is an ODE solver, the mapping is deterministic; similarly for the latent space of a GAN. I feel the authors need to show this to justify the claim that the “*approach proposed below is theoretically justified for any generative model*”.
- My hunch is that it (Eq. 4) will fall back to the Jacobian metric, which is the pullback of the Euclidean metric in x space. Then it will be quite similar to the metric studied by previous works [1][2].
[1] Shao, H., Kumar, A., & Thomas Fletcher, P. (2018). The riemannian geometry of deep generative models.
[2] Wang, B., & Ponce, C. R. (2021). The geometry of deep generative image models and its applications. ICLR
Experimental Designs Or Analyses: - Experimental design and results in Fig. 3-4 are convincing, showing that the inference method can recover known results.
Questions:
- **Major point: Random selection of 3 initial latents and charting the 2d plane between them.**
I feel this design risks falling off the manifold of the Gaussian hypersphere. Since the trained latents were sampled from N(0,I), as long as your z_0, z_1, z_2 are not too close, many of their interpolations will be "off" the distribution and have incorrect norm, so I'm not sure the score function / the sampler can do the right thing for those samples.
Could you consider a 2d submanifold on the sphere, similar to the way people interpolated latent spaces in the GAN era ([3], [4] Fig. 1E)?
- In previous works, people seem to find that the manifold can look quite continuous if we perturb the initial state in the "good directions" found by PC (see [5] Fig. 5 and supp). So I'm not sure if the fractal structure you found in your Fig. 6 arises because we are charting a subspace of the latent space that is unnatural for diffusion models.
- **Minor point:** For Fig. 6, it's very cool to show this fractal boundary perceptually; is there a way to quantify the "fractalness" of it?
- **Minor point:** For Fig. 7C, we see a pretty linear border separating the "phases" on the 2d space. I feel part of it is due to the model you used to approximate $\log Z(t)$ being an MLP with ReLU, which is piecewise linear. Thus the separating boundary has to be piecewise linear in a sense.
[3] White, T. (2016). Sampling generative networks. arXiv preprint arXiv:1609.04468.
[4] Wang, B., & Ponce, C. R. (2022). Tuning landscapes of the ventral stream. *Cell Reports*, *41*(6).
[5] Wang, B., & Vastola, J. J. (2023). Diffusion models generate images like painters: an analytical theory of outline first, details later. *arXiv:2303.02490*.
Supplementary Material: Yes the proof A. and method B.
Relation To Broader Scientific Literature: - The multiple links drawn between statistical physics and generative models are intriguing, and different from many other previous links in DL theory.
Essential References Not Discussed: In the introduction section "*Riemannian geometry of latent space*" the authors forgot to cite / discuss one important paper along this line [^2], which also managed to compute and analyze the structure of a Riemannian (Hessian) metric for the latent space of GANs. Since a GAN is a deterministic mapping from latent space to image space, their Hessian metric of the latent space is derived by pulling back an image distance function (e.g. LPIPS) and computing its Hessian at each point. Thus, in their case the potential function is $\phi_x(x')=D(G(x),G(x'))$. The potential function is local to each point, and not necessarily global. But that framework can deal with higher-dimensional manifolds > 2d, though without the guarantee as in *(Bryant–Amari–Armstrong)*.
The overall conceptual framework of the current paper shares many similarities with [3], so it would be nice to discuss the connection and shared spirit with it.
[2] : Wang, B., & Ponce, C. R. (2021). The geometry of deep generative image models and its applications, ICLR, 2021
Other Strengths And Weaknesses: - The authors put efforts into the formalism and well made illustrations; they helped understanding a lot.
- The connection the authors draw between generative models and statistical physics models is interesting and quite original.
- Fig. 6 the visualization of the fractal and self-similar structure of the phase boundary within diffusion model latent space is quite novel and intriguing!
Other Comments Or Suggestions: No.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the elaborate questions regarding theory and method.
- The proposed approach seems a bit overly complex, not clear whether it is better than established approach, that is, pulling back the euclidean metric in feature space [Wang & Ponce 2021] [Shao, Kumar, & Fletcher, 2018].
We admit that the proposed algorithm is more complex than Wang et al. (2021) and Shao et al. (2018). However, we would like to highlight that this complexity comes from the broader coverage of the algorithm, namely, its ability to work with stochastic sampling, where pulling back the Euclidean metric in feature space is impossible and results in a stochastic metric tensor.
- Eq.19 is not very different from the previous Jacobian formulations
For numerical comparison with previous work please refer to the General Response section in responses to reviewers wCzF and GBhG for details.
- The transition from Eq.21 to 24. esp. 22. Why we can learn $log Z_θ(t)$ by minimizing the JS divergence in Eq.22? Do we have some theoretical properties about the objective in Eq.22?
Regarding Eq. 22, in general one could use other loss functions as a distance between distributions. We discuss after Theorem 3.1 that the MSE loss exhibits vanishing gradients and thus is not suitable in our case. Since the Jensen-Shannon divergence is a proper metric on probability distributions, its convergence to zero guarantees the convergence of the MSE loss.
- Also does Eq. 21 require the latent space you are studying is compact, or the integration is impossible?
Yes, the latent space has to be compact, and we require that integration over it is possible.
- If the generative mapping is deterministic (i.e., a delta measure), does the theory still work?
Since a delta measure can be uniformly approximated by Gaussians, and the theory is justified for any Gaussian, it can be proven that the approach still works in the limit.
- This design has the risk of falling off the manifold of Gaussian hypersphere
Indeed, this is a valid point that is missing from the main text. To prevent our interpolation from falling off the Gaussian hypersphere, in all our experiments we employed a normalization of the latent vector z.
---
Rebuttal Comment 1.1:
Comment: Thank you for the concise response. The falling-off manifold & normalization point is crucial for evaluating the results, and the authors should emphasize it in the final version of the paper.
We will keep the score as is. | Summary: This paper introduces a novel approach to exploring the geometric structure of latent
spaces in generative models by estimating the Fisher metric and investigating phase
transitions. Through theoretical developments and empirical validation on statistical
physics models and Stable Diffusion. Key findings include the identification of
distinct phases within the latent space, fractal-like boundaries between these phases,
and the property that geodesic interpolations are linear within phases while exhibiting
nonlinear behavior at phase boundaries. Overall, this work provides new insights into
the latent space of generative models.
Claims And Evidence: Claims: A novel method to reconstruct the Fisher metric in latent spaces of generative models.
Evidence 1: The authors provide theoretical foundations through Theorems 3.1 and 3.2.
Methods And Evaluation Criteria: The method uses u2-net and CLIP for different generative models. The authors compare the RMSE in the statistical-physics-model experiments, and perform geodesic interpolation analysis and phase transition identification in the diffusion model experiments.
Problems: when doing feature extraction, different feature extractors could yield varying results. What may happen if the authors use different feature extractors?
Theoretical Claims: The research extends our understanding of generative models beyond traditional perspectives, revealing their intricate geometric structures and phase transition characteristics. The mathematical details are correct, the proof process is rigorous, and the use of symbols is standardized. The paper is very solid in theoretical derivation.
Experimental Designs Or Analyses: The experiments are primarily validated on specific models (Ising and TASEP) and Stable Diffusion. There are generalizability concerns: since validation is limited to these specific models, universal applicability across generative models remains unclear.
Supplementary Material: Appendix B parts.
Relation To Broader Scientific Literature: none
Essential References Not Discussed: This is a brand-new information geometry method, and I cannot give related references.
Other Strengths And Weaknesses: (1) Strengths:
Theoretical Innovation: bridges multiple disciplines (statistical physics, information geometry, machine learning).
Methodological Contributions: a novel approach to reconstructing the Fisher metric, with theoretical convergence guarantees.
(2) Weaknesses:
Generalizability Limitations: what about other feature extractors and generative models?
Computational Complexity: the theoretical proofs suggest scaling challenges, and there may be potential computational overhead for large-scale models.
Other Comments Or Suggestions: none
Questions For Authors: 1) Generalizability: How confident are you that the proposed Fisher metric reconstruction method can be generalized beyond the specific models tested, and to different generative models?
2) Feature Extraction Sensitivity: How robust is your method to different feature extraction techniques?
3) Computational Complexity: Could you address the scalability challenges in high-dimensional latent spaces?
4) Modeling application: How could this analysis be applied to improve model performance?
5) Phase Transition: You observed fractal-like boundaries in latent spaces; what underlying physical or mathematical principles explain these transitions?
Ethical Review Concerns: none
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and hope that the response below will address the raised points.
- How confident are you that the proposed Fisher metric reconstruction method can be generalized beyond the specific models?
We are fairly confident it can generalize to other types of generative models, since it is justified by the Bryant–Amari–Armstrong theorem for 2D sections of latent spaces. For higher-dimensional spaces it is not guaranteed to work. Beyond stable diffusion models, we tested the method with StyleGAN3; please see Fig. 9, "Free energy surface of a StyleGAN v3 reconstructed using CLIP distance", in the supplementary material. The learned surface has little curvature and exhibits a single phase, which is in agreement with previous research by Wang & Ponce (2021).
- Different feature extractors could yield varying results
During our evaluation done with CLIP, pixel-wise distances and PPL (perceptual path length), we obtained grid approximations of the local metric following Wang & Ponce (2021). We observed that phase boundaries are stable under the tested feature extractors. Because of this, we conclude that our algorithm will learn the same phase boundaries. Please see the General Response section in responses to reviewers wCzF and GBhG for details.
- Scalability challenges in high-dimensional latent spaces
Our current method is justified for 2D sections of high-dimensional latent spaces, as we follow Amari and Armstrong (2013) and Bryant (2024).
Though the training procedure can be extended with a CNN that predicts parameters of Gaussian mixture models, the theoretical grounding is the subject of further research.
- Phase Transition: What underlying physical or mathematical principles explain these transitions?
We suppose that this phenomenon is related to the structure of image space, as studied in the ICLR 2023 paper “Verifying the Union of Manifolds Hypothesis for Image Data” by Brown et al. In this paper, it is shown that image data lies on a disjoint union of lower-dimensional manifolds with varying dimensions. The generative process of diffusion models begins with a Gaussian distribution defined on a full-dimensional latent space, and the reverse ODE trajectories end at significantly lower-dimensional, disjoint manifolds that represent real image data. Such a mapping—from a higher-dimensional unimodal distribution to a multimodal data manifold with disjoint supports for each mode—may exhibit a diverging Lyapunov exponent, or, in other words, a diverging Lipschitz constant for the generative mapping from the latent space to the data space.
This phenomenon can be illustrated by the following Proposition, which simulates a lower-dimensional data manifold with disjoint supports.
# Proposition
Suppose that the (target) data distribution is a bimodal mixture of two Gaussians, each with variance $\sigma^2$:
$$
p_0(x) = \frac{1}{2}\mathcal{N}(x\mid -1,\sigma^2) + \frac{1}{2}\mathcal{N}(x\mid 1,\sigma^2)
$$
The latent (source) distribution is the standard normal $\mathcal{N}(x\mid 0,1)$. Consider the Variance Preserving SDE
$$
dX_t = -\frac{1}{2}\beta X_t dt + \sqrt{\beta} dW_t
$$
Then the Lyapunov exponent of the corresponding reverse-time ODE at $x=0$ has the following form
$$
\lambda=\frac{\beta}{2}\left(1+\frac{1-\sigma^2}{\sigma^4}\right)
$$
and it diverges to infinity as $\sigma$ goes to zero. In this case the point $x=0$ could be interpreted as a phase transition boundary.
# Proof
We begin with the reverse probability flow ODE written as
$$
\frac{dX_s}{ds}=-f(X_s,t)+\frac{g(t)^2}{2}\nabla_x\log p_t(X_s),
$$
where the drift term is
$$
v(x)= -f(x,t)+\frac{g(t)^2}{2}\nabla_x\log p_t(x).
$$
Linearizing around $x=0$, the Lyapunov exponent is defined by
$$
\lambda=v'(0)=-f'(0,t)+\frac{g(t)^2}{2}\frac{d^2}{dx^2}\log p_t(x)\Big|_{x=0}.
$$
The density $p_t(x)$ is computed via convolution with the Gaussian noising kernel
$$
p_t(x)=\int_{-\infty}^{\infty} p_0(y)\mathcal{N}\Bigl(x\Big|e^{-\frac{1}{2}\beta t}y,1-e^{-\beta t}\Bigr)dy.
$$
Since a convolution of Gaussians is still Gaussian, we obtain
$$
p_t(x)=\frac{1}{2\sqrt{2\pi}\sigma_{1}(t)}A(x)
$$
where
$$
A(x)=e^{-\frac{(x-\mu(t))^2}{2\sigma_1^2(t)}}+e^{-\frac{(x+\mu(t))^2}{2\sigma_1^2(t)}}
$$
and
$$
\mu(t)=e^{-\frac{1}{2}\beta t},\qquad \sigma_{1}^2(t)=e^{-\beta t}\sigma^2+(1-e^{-\beta t})
$$
Then by computing derivatives in the definition of Lyapunov exponent we get
$$
\lambda = \frac{\beta}{2} + \frac{\beta}{2} \frac{e^{-\beta t} - \sigma_1^2(t)}{\sigma_1^4(t)}
= \frac{\beta}{2} (1 + \frac{e^{-\beta t} - \sigma_1^2(t)}{\sigma_1^4(t)})
$$
As time goes to 0 we get $\sigma_1 \rightarrow \sigma$, and obtain a diverging Lyapunov exponent for small $\sigma$
$$
\lambda=\frac{\beta}{2}\left(1+\frac{1-\sigma^2}{\sigma^4}\right)
$$
suggesting divergence of close reverse ODE trajectories.
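As a quick numerical sanity check of this closed form (an illustrative sketch, not part of the original rebuttal; the $\beta$, $\sigma$ and step-size values below are arbitrary), one can finite-difference $\log p_t$ at $x=0$ and compare against the analytic exponent:

```python
import math

def log_p_t(x, t, beta=1.0, sigma=0.1):
    """log of the noised bimodal density p_t(x) derived above."""
    mu = math.exp(-0.5 * beta * t)                       # modes drift to +/- mu(t)
    var1 = math.exp(-beta * t) * sigma**2 + (1.0 - math.exp(-beta * t))
    a = math.exp(-(x - mu)**2 / (2 * var1)) + math.exp(-(x + mu)**2 / (2 * var1))
    return math.log(a) - math.log(2.0 * math.sqrt(2.0 * math.pi * var1))

def lyapunov_numeric(t, beta=1.0, sigma=0.1, h=1e-4):
    """lambda = beta/2 + (beta/2) * (log p_t)''(0), via central differences."""
    d2 = (log_p_t(h, t, beta, sigma) - 2.0 * log_p_t(0.0, t, beta, sigma)
          + log_p_t(-h, t, beta, sigma)) / h**2
    return beta / 2.0 + beta / 2.0 * d2

def lyapunov_closed(t, beta=1.0, sigma=0.1):
    """Closed form: (beta/2) * (1 + (e^{-beta t} - sigma_1^2) / sigma_1^4)."""
    var1 = math.exp(-beta * t) * sigma**2 + (1.0 - math.exp(-beta * t))
    return beta / 2.0 * (1.0 + (math.exp(-beta * t) - var1) / var1**2)

for t in (0.01, 0.1, 0.5):
    assert abs(lyapunov_numeric(t) - lyapunov_closed(t)) < 1e-3 * abs(lyapunov_closed(t))
```

The numerical and closed-form exponents agree, and the closed form blows up as $\sigma \to 0$, consistent with the divergence claimed above.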
End of the Proof. | null | null | null | null | null | null |
Conformal Tail Risk Control for Large Language Model Alignment | Accept (poster) | Summary: The paper studies the problem of making sure that the LLM outputs align with human preferences. To this end, they construct an approach where the output of the LLM is returned only if its machine (disutility) score is lower than a "toxicity" threshold. Since machine scores can be different than "true" human scores, they construct a calibration procedure based on risk control. This allows them to find a threshold that guarantees that some tail statistics (e.g. ,CVaR) of the true disutility score of the returned answers will be lower than a user-chosen control level alpha.
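The accept-if-below-threshold calibration described in this summary can be sketched in a few lines. This is an illustration only, with synthetic scores and a plain empirical CVaR check in place of the paper's high-probability concentration bound; all names here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
machine = rng.beta(2, 5, size=5000)                           # machine disutility scores
human = np.clip(machine + rng.normal(0.0, 0.05, 5000), 0, 1)  # imperfect "human" scores

def cvar(x, beta):
    """Mean of the worst (1 - beta) fraction of scores."""
    return x[x >= np.quantile(x, beta)].mean()

def calibrate(machine, human, alpha, beta, grid=200):
    """Largest threshold lam such that answers with machine score <= lam
    keep the empirical CVaR of their human scores below alpha."""
    best = machine.min()
    for lam in np.linspace(machine.min(), machine.max(), grid):
        accepted = human[machine <= lam]
        if accepted.size and cvar(accepted, beta) <= alpha:
            best = lam
    return best

lam = calibrate(machine, human, alpha=0.25, beta=0.75)
```

At deployment, an answer would then be returned only if its machine score is at most `lam`; the paper's actual procedure additionally corrects for finite-sample error so the guarantee holds with high probability.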
Claims And Evidence: /
Methods And Evaluation Criteria: /
Theoretical Claims: /
Experimental Designs Or Analyses: /
Supplementary Material: /
Relation To Broader Scientific Literature: /
Essential References Not Discussed: /
Other Strengths And Weaknesses: Strengths:
- The setting considered is interesting and, to the best of my knowledge, novel.
- The experiments mainly confirm the claims made in the paper.
Weaknesses:
- The manuscript is missing a related work section. Moreover, the background is written quite confusingly in my opinion. This makes it hard to understand what theoretical statements are novel and which are part of the prior work. Concretely, from my understanding the original risk control paper [1] has been extended from expected loss to VaR/CVaR loss already in [2]. So the contribution of this manuscript is not in moving from the expected loss to VaR/CVaR, but rather only in considering L-statistics to construct an upper confidence bound needed to find the risk-controlling threshold (Section 3.2). Could the authors elaborate on this? Especially, could the authors clarify in detail how their work extends and differs from [2]
- The main "cost" of calibration is the human annotations needed. For $n$ prompts and $N$ LLM samples per prompt, this amounts to $n \times N$ human annotations. The size of the calibration dataset in the experiments is $n=6000$ and $N=40$, which would require $240K$ human annotations. This contradicts the authors' claim that their approach is 'lightweight'. The authors circumvent this in their experiments by replacing human annotations with the Detoxify model. One possible way to address this issue would be to show that their calibration is still statistically efficient (i.e., the empirical test risk being close to the diagonal) for smaller calibration set sizes, e.g. when using only $n=100$ prompts. Note that risk control for smaller calibration sizes has been studied before; see for example [3]
- One nitpick: the authors talk about conformal risk control, which I find a bit confusing, since the original conformal risk control paper [4] is about controlling the risk in expectation (w.r.t. the draw of a calibration dataset), whereas this manuscript is about providing the guarantee with high probability, making it more similar to [1] than to [4]
[1] Bates, S., Angelopoulos, A., Lei, L., Malik, J. and Jordan, M. Distribution-free, risk-controlling prediction sets. Journal of the ACM (JACM) 2021
[2] Snell, J.C., Zollo, T.P., Deng, Z., Pitassi, T. and Zemel, R. Quantile risk control: A flexible framework for bounding the probability of high-loss predictions. ICLR 2023
[3] Jazbec, M., Timans, A., Veljković, T.H., Sakmann, K., Zhang, D., Naesseth, C.A. and Nalisnick, E. Fast yet safe: Early-exiting with risk control. NeurIPS 2024
[4] Angelopoulos, A.N., Bates, S., Fisch, A., Lei, L. and Schuster, T. Conformal risk control. ICLR 2024
Other Comments Or Suggestions: /
Questions For Authors: /
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful feedback. We are encouraged by the recognition of key strengths in our submission, including:
- The **novelty** of the problem setting—controlling tail risk in LLM outputs with distribution-free guarantees;
- The **practical relevance and effectiveness** of our method, supported by empirical results.
### **Weakness 1: Clarifying novelty beyond QRC [2]**
Thank you for the thoughtful feedback. In [2], QRC relies on concentration inequalities for cumulative distribution functions (CDFs), specifically the BJ and DKW inequalities, to derive uniform upper bounds between the empirical and true quantiles. While effective, these bounds are conservative, as they do not take the weight function $\psi$ into account. In contrast, **our work proposes a novel approach by formulating distortion risk control through L-statistics**, which are tailored to the form of the distortion function $\psi$. This allows us to:
- **Directly bound the distortion risk functional** rather than the entire CDF;
- **Leverage asymptotic normality** of L-statistics to obtain asymptotically tight, more efficient bounds;
- **Reduce sampling costs significantly** while maintaining risk guarantees, as demonstrated in our empirical results.
We will revise the introduction and background to clearly differentiate our contribution from prior work.
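To make the L-statistics point concrete: for CVaR, the distortion weight $\psi$ is uniform on the upper tail, so the estimator is just a $\psi$-weighted average of order statistics. The sketch below is illustrative only; it shows the plug-in point estimate, not the asymptotic-normality correction used for the actual bound:

```python
import numpy as np

def cvar_l_statistic(samples, beta):
    """Plug-in L-statistic for CVaR_beta: the weight of the i-th order
    statistic is the mass of [i/n, (i+1)/n] inside the tail [beta, 1]."""
    x = np.sort(np.asarray(samples))
    n = x.size
    edges = np.arange(n + 1) / n
    mass = np.clip(edges[1:], beta, 1.0) - np.clip(edges[:-1], beta, 1.0)
    return float(np.dot(mass, x) / (1.0 - beta))

rng = np.random.default_rng(1)
x = rng.exponential(size=100_000)
est = cvar_l_statistic(x, 0.75)
# For Exp(1), the true CVaR_beta is 1 - log(1 - beta) = 1 + log 4, about 2.386.
```

Because the weights follow $\psi$ directly, the bound only needs to control this one weighted sum, rather than a uniform band around the entire empirical CDF.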
### **Weakness 2: Statistical efficiency at smaller calibration sizes**
This is a valid concern. We conducted an ablation experiment on calibration set sizes and observed how the deployment cost changes, as illustrated in the following tables.
We observe that as the calibration size decreases, all methods become more conservative—realized costs increase and risk metrics deviate more from their expected values. However, DRC-L is still statistically more efficient. Interestingly, DRC-L also shows the smallest increase compared with DRC-DKW and DRC-BJ. For example, when the calibration size drops from 6000 to 1000, the cost of DRC-L increases by only 7.7%, compared to 17% for DRC-BJ and 28% for DRC-DKW, highlighting the advantage of L-statistics and its robustness to sample size in risk estimation.
Table 1: Realized average cost with calibration size n of our method (DRC-L) and baselines for CVaR, $\alpha = 0.25$, $\beta = 0.75$
| n | DRC-BJ | DRC-DKW | **DRC-L** |
|-------------|--------|---------|--------|
| 1000 | 5.3074 | 6.5431 | **4.5604** |
| 2000 | 4.9560 | 5.8082 | **4.4184** |
| 3000 | 4.7769 | 5.3465 | **4.3156** |
| 6000 | 4.5313 | 5.1037 | **4.2362** |
Table 2: Realized CVaR with calibration size n of our method (DRC-L) and baselines for $\alpha = 0.25$, $\beta = 0.75$
| n | DRC-BJ | DRC-DKW | **DRC-L** |
|-------------|---------|---------|--------|
| 1000 | 0.1834 | 0.1247 | **0.2236** |
| 2000 | 0.2041 | 0.1536 | **0.2305** |
| 3000 | 0.2128 | 0.1720 | **0.2367** |
| 6000 | 0.2162 | 0.1884 | **0.2396** |
Table 3: Realized average cost with calibration size n of our method (DRC-L) and baselines for VaR, $\alpha = 0.25$, $\beta = 0.75$
| n | DRC-BJ | DRC-DKW | **DRC-L** |
|-------------|--------|---------|--------|
| 1000 | 2.4266 | 2.3870 | **2.1309** |
| 2000 | 2.3063 | 2.2917 | **2.1167** |
| 3000 | 2.2181 | 2.2524 | **2.1174** |
| 6000 | 2.2335 | 2.2140 | **2.0955** |
Table 4: Realized VaR with calibration size n of our method (DRC-L) and baselines for $\alpha = 0.25$, $\beta = 0.75$
| n | DRC-BJ | DRC-DKW | **DRC-L** |
|-------------|---------|---------|--------|
| 1000 | 0.1954 | 0.2006 | **0.2423** |
| 2000 | 0.2117 | 0.2150 | **0.2444** |
| 3000 | 0.2233 | 0.2197 | **0.2442** |
| 6000 | 0.2237 | 0.2274 | **0.2495** |
We hope this addresses your concern. Please let us know if there are any other aspects you'd like us to discuss further.
### **Weakness 3: Terminology clarification on “Conformal Risk Control”**
We appreciate this point. As the reviewer notes, [4] defines conformal risk control in expectation, while our method provides high-probability guarantees, making it more closely aligned with the RCPS framework [1].
In our manuscript, we use the term “conformal risk control” more broadly to refer to the overarching goal of risk-aware inference under distribution-free guarantees. However, we agree that this may cause confusion and will revise the terminology to better reflect our actual contribution. Specifically, we will reframe our method as **“conformal tail risk control with high probability”**, to distinguish it from the expectation-based guarantees in [4] and emphasize its closer connection to [1].
We appreciate this clarification and will ensure the distinction is made explicit in the revised manuscript. Let us know if you would like us to elaborate on this point further.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal.
It's cool to see that your method remains statistically efficient also for smaller calibration sets. However, n=1000 would still require 40k human annotations, so I still think it would be valuable to bring n down even further for this particular experiment (e.g., n=100 or n=200).
I increased my score to 3 (in good faith that the authors will deliver on their promise and rewrite the intro + related work + background in such a way that their contribution over QRC will be made more clear)
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestion—this encouraged us to further evaluate our method under smaller calibration set sizes. For small calibration sizes (e.g., n = 50 or 100), DRC-BJ and DRC-DKW become substantially more conservative, while our method, DRC-L, is only marginally more so. In addition, comparing the cost of the three methods as the calibration size drops from n = 1000 to n = 50 in Table 1:
- The cost of DRC-BJ increases by ~184% (from 3.2131 to 9.1191),
- DRC-DKW increases by ~63% (from 3.5322 to 5.7575),
- while DRC-L increases by only ~20% (from 2.8693 to 3.4364).
This highlights that DRC-L maintains its statistical efficiency and continues to offer well-calibrated risk control.
We summarize the results below:
Table 1: Realized average cost with calibration size n of our method (DRC-L) and baselines for CVaR, $\alpha = 0.25$, $\beta = 0.75$
| n | DRC-BJ | DRC-DKW | **DRC-L** |
|-----|------------------|------------------|-------------------|
| 50 | 9.1191 ± 1.1198 | 5.7575 ± 1.4461 | **3.4364 ± 0.6710** |
| 100 | 5.5664 ± 0.5813 | 6.0425 ± 1.2632 | **3.2318 ± 0.3999** |
| 200 | 4.1456 ± 0.4090 | 4.9123 ± 0.3920 | **3.0617 ± 0.1809** |
| 1000 | 3.2131 ± 0.1218 | 3.5322 ± 0.1419 | **2.8693 ± 0.1447** |
| 2000 | 3.0278 ± 0.0822 | 3.2340 ± 0.0734 | **2.7907 ± 0.1035** |
| 3000 | 2.9480 ± 0.0868 | 3.1170 ± 0.0867 | **2.7617 ± 0.1027** |
Table 2: Realized CVaR with calibration size n of our method (DRC-L) and baselines for $\alpha = 0.25$, $\beta = 0.75$
| n | DRC-BJ | DRC-DKW | **DRC-L** |
|-----|------------------|------------------|-------------------|
| 50 | 0.0222 ± 0.0114 | 0.0944 ± 0.0457 | **0.1938 ± 0.0443** |
| 100 | 0.0940 ± 0.0202 | 0.0842 ± 0.0347 | **0.2032 ± 0.0301** |
| 200 | 0.1458 ± 0.0180 | 0.1153 ± 0.0134 | **0.2149 ± 0.0151** |
| 1000 | 0.2020 ± 0.0136 | 0.1793 ± 0.0122 | **0.2318 ± 0.0151** |
| 2000 | 0.2172 ± 0.0100 | 0.2003 ± 0.0081 | **0.2394 ± 0.0103** |
| 3000 | 0.2239 ± 0.0103 | 0.2100 ± 0.0091 | **0.2414 ± 0.0103** |
Table 3: Realized average cost with calibration size n of our method (DRC-L) and baselines for VaR, $\alpha = 0.25$, $\beta = 0.75$
| n | DRC-BJ | DRC-DKW | **DRC-L** |
|-------------|------------------|------------------|------------------------|
| 50 | 4.0985 ± 0.7236 | 4.2995 ± 0.7683 | **2.1603 ± 0.3348** |
| 100 | 3.1811 ± 0.5313 | 3.2565 ± 0.5611 | **2.1687 ± 0.2745** |
| 200 | 2.9879 ± 0.3715 | 2.9655 ± 0.3716 | **2.2334 ± 0.2006** |
| 1000 | 2.4266 ± 0.0894 | 2.3870 ± 0.0918 | **2.1309 ± 0.0762** |
| 2000 | 2.3063 ± 0.0814 | 2.2917 ± 0.0860 | **2.1167 ± 0.0520** |
| 3000 | 2.2181 ± 0.0548 | 2.2524 ± 0.0649 | **2.1174 ± 0.0423** |
Table 4: Realized VaR with calibration size n of our method (DRC-L) and baselines for $\alpha = 0.25$, $\beta = 0.75$
| n | DRC-BJ | DRC-DKW | **DRC-L** |
|-------------|------------------|------------------|------------------------|
| 50 | 0.0795 ± 0.0321 | 0.0722 ± 0.0309 | **0.2535 ± 0.0700** |
| 100 | 0.1319 ± 0.0387 | 0.1233 ± 0.0380 | **0.2460 ± 0.0616** |
| 200 | 0.1400 ± 0.0329 | 0.1417 ± 0.0332 | **0.2280 ± 0.0345** |
| 1000 | 0.1954 ± 0.0126 | 0.2006 ± 0.0136 | **0.2423 ± 0.0156** |
| 2000 | 0.2117 ± 0.0132 | 0.2150 ± 0.0138 | **0.2444 ± 0.0145** |
| 3000 | 0.2233 ± 0.0131 | 0.2197 ± 0.0098 | **0.2442 ± 0.0112** |
We hope this additional analysis addresses your concern. Please don't hesitate to let us know if there are any further aspects you'd like us to clarify or explore. | Summary: To avoid the high cost of human annotations, researchers have developed automatic scoring models to assess the tail events produced by LLMs, such as toxic answers.
However, there may be a misalignment between human judgement and model scoring. To solve this issue, this study proposes a lightweight calibration framework through the lens of risk control to ensure the alignment of humans and machines with provable guarantees. In addition, the authors demonstrate the utility of their calibration framework through experiments on a semi-synthetic benchmark.
Claims And Evidence: Lines 76-78: "In this work, we explore how distortion risk control can be applied to *align LLMs with respect to any disutility metric* ...";
Lines 59-62: "Although *machine ratings* are inexpensive and scalable, the misalignment, or lack of rank preservation between the machine and human ratings diminishes its reliability. ".
Which component should be aligned with humans in this work: the LLM or the toxicity classifier? A more explicit clarification or an overall pipeline **diagram** could facilitate better understanding.
Methods And Evaluation Criteria: This work requires valid baseline comparisons, not just comparisons between variations of the proposed method.
For existing alignment methods such as RLHF, the authors should consider providing a quantitative or qualitative comparison table to summarize and highlight the advantages of the proposed method over existing studies.
Theoretical Claims: I have no questions about the theoretical claims in the paper.
Experimental Designs Or Analyses: 1. Lines 378-379: "... dataset that consists of c% of most and least toxic instances". Could you provide a more detailed explanation for this?
2. The experimental results all come from a single model, Llama-2-7B. More LLMs should be considered to avoid randomness in the results and ensure the validity of the conclusions.
In particular, possible differences in sampling costs between LLMs might make the cost analysis more complete.
Supplementary Material: I have reviewed the codes in the supplementary material and have no questions about them.
Relation To Broader Scientific Literature: This work proposes a lightweight and provably guaranteed value-alignment framework for LLMs.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The writing and symbols used are rigorous and standardized.
2. A new alignment method is proposed that does not require LLM parameter updating.
Weakness:
1. The practicality of the method is questionable. The method requires that the prompt distribution is consistent with the calibration data, which imposes requirements on the number and quality of prompts needed for calibration; on the other hand, there is a lack of real-world experiments. In practical applications, for a given risk level, it is unclear whether an effective λ can be found and under what conditions.
2. The experimental results all come from a single model, Llama-2-7B, which is not sufficient to ensure the effectiveness of the method and the reliability of the conclusions.
Other Comments Or Suggestions: See above sections.
Questions For Authors: See above sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s thoughtful feedback and are encouraged by the recognition of our paper’s strengths, including:
- The **novelty** of the problem setting—tail risk control in LLM alignment with provable guarantees;
- The **rigor and clarity** of our writing and symbolic notation;
- The practicality of our **lightweight alignment** method that avoids LLM retraining.
### **Comparison with RLHF and existing alignment methods**
Thank you for the suggestion. Our method differs fundamentally from RLHF-based techniques. RLHF typically requires model access and expensive retraining, while our method is **post hoc**, requiring no access to model internals or parameter updates. It is thus more scalable and deployable.
Further, RLHF lacks formal guarantees for safety or tail risk control. Our approach directly addresses this gap by providing statistically provable control over tail disutility metrics (e.g., CVaR and VaR). We will include a qualitative comparison table in the revision to emphasize these distinctions.
### **Weakness 1: Practicality, real-world experiments, and prompt distribution assumptions**
We agree that practicality is essential. We conducted real-world evaluations using Qwen2.5-1.5B and LLaMA3.2-3B on the RealToxicityPrompts (RTP) dataset, beyond LLaMA-2-7B. These experiments confirm that DRC-L consistently satisfies risk control requirements while minimizing inference-time cost.
Regarding distribution assumptions: while all conformal methods require the calibration distribution to reflect deployment, **our framework supports reweighting calibration examples by density ratios (aka importance weights) to handle distribution shifts**, thus improving robustness in practice. We will add a theorem showing the validity of the reweighted DRC-L in the revision.
We also empirically find that for a given risk level, a reliable $\lambda$ threshold can be estimated adaptively, even with modest calibration sizes. We will expand on this with clarifying details and additional ablations in the revision.
### **Weakness 2: Reliance on a single model**
Thank you for the feedback. We have conducted additional experiments using the Llama3.2-3B and Qwen2.5-1.5B models on the RealToxicPrompts (RTP) dataset, following the same setting as illustrated in Section 4.1 in our manuscript. The results show that **DRC-L consistently meets the risk control requirement while achieving lower cost compared to DKW and BJ**. For all tables, the metrics are denoted by “average ± standard error” with 15 independent experiments. Below we show the sample results of the Llama3.2-3B model with $\beta = 0.5$, and $\alpha = 0.25$. We have additional experiments with other models (e.g., Qwen2.5-1.5B), and different settings of $\alpha$ and $\beta$.
Table 1: Realized CVaR and average cost on RTP dataset with Llama3.2-3B model.
| Method | $\beta$| $\alpha$ | Realized CVaR | Cost |
|---------|------|-------|----------------------|----------------------|
| DRC-BJ | 0.5 | 0.25 | $0.23296 \pm 0.00520$ | $1.71155 \pm 0.02993$ |
| DRC-DKW | 0.5 | 0.25 | $0.22205 \pm 0.00544$ | $1.73714 \pm 0.03235$ |
| DRC-L | 0.5 | 0.25 | $0.24469 \pm 0.00534$ | $1.65423 \pm 0.02630$ |
Table 2: Realized VaR and average cost on RTP dataset with Llama3.2-3B model.
| Method | $\beta$| $\alpha$| Realized VaR | Cost |
|----------------|------|-------|----------------------|-----------------------|
| DRC-BJ | 0.5 | 0.25 | $0.21830 \pm 0.00831$ | $1.11849 \pm 0.01100$ |
| DRC-DKW | 0.5 | 0.25 | $0.22295 \pm 0.00805$ | $1.11138 \pm 0.01153$ |
| DRC-L | 0.5 | 0.25 | $0.24219 \pm 0.00905$ | $1.08460 \pm 0.01146$ |
We will add these results to the paper to strengthen the empirical claims.
### **Clarification: Biased Scoring Model via c% Extremes**
Thank you for raising this. To simulate a biased scoring model, we retrain Detoxify on a dataset composed of texts with the **top and bottom c% of toxicity scores**, removing the middle. This produces a model skewed toward extreme judgments and serves as a proxy for biased disutility classifiers. We will revise the manuscript to clarify this process.
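As an illustration of this construction, a minimal sketch of the filtering step (the helper `extreme_subset` is hypothetical; the actual pipeline retrains Detoxify on the filtered texts):

```python
import numpy as np

def extreme_subset(texts, scores, c=0.2):
    """Keep only the bottom and top c fraction of examples by score.

    Hypothetical sketch of the biased-retraining construction: dropping
    the middle of the toxicity-score distribution yields a training set
    skewed toward extreme judgments.
    """
    scores = np.asarray(scores, dtype=float)
    lo, hi = np.quantile(scores, [c, 1.0 - c])
    keep = (scores <= lo) | (scores >= hi)
    return [t for t, k in zip(texts, keep) if k], scores[keep]

texts = [f"text-{i}" for i in range(10)]
scores = [i / 10 for i in range(10)]                 # 0.0, 0.1, ..., 0.9
kept, kept_scores = extreme_subset(texts, scores, c=0.2)
# keeps the two lowest- and two highest-scoring texts; the middle is dropped
```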
### **Clarification: Which component is aligned—LLM or classifier?**
This is a great question. Our method supports **two complementary alignment views**:
1. **Aligning the LLM**: Holding the classifier fixed, our procedure filters LLM outputs to meet human-aligned thresholds.
2. **Recalibrating the classifier**: Holding the LLM fixed, we can use our method to adjust $\hat{\lambda}$, effectively realigning machine scores to better match human disutility.
We will include a pipeline diagram in the revised manuscript to illustrate this dual perspective and how the components interact. | Summary: This paper focuses on the application of conformal prediction to tail events, which can lead to poor outcomes. The authors propose a lightweight calibration framework for black-box models that ensures alignment between humans and machines with provable guarantees. They utilize L-statistics, the DKW inequality, and Berk-Jones statistics for conformal risk control. Extensive experiments are conducted to validate their proposed method. The application of statistical methods to conformal risk control is an interesting direction.
Claims And Evidence: Yes, they claim that their method addresses the issue of unexpectedly poor outcomes by aligning humans and machines with provable guarantees. The experimental results on the CVaR metric support their claim, demonstrating the effectiveness of their approach. Additionally, the experiment on deployment cost confirms their intuition that better-aligned machine ratings reduce the cost of calibration.
Methods And Evaluation Criteria: The paper proposes applying three different statistical methods for conformal risk control. However, it does not provide any justification for why these specific methods are needed. For instance, in Section 3.2, it would be beneficial to explain why the L-statistic is particularly suitable for conformal risk control rather than relying solely on theoretical proof.
Additionally, in Figures 4 and 5, if my understanding is correct, the first row represents realized CVaR versus α, while the second row shows average sampling cost versus α. However, these figures appear too similar, which may cause confusion for readers. To improve clarity, it would be better to introduce more distinct differences between the figures rather than relying solely on the text captions.
For evaluation criteria, the CVaR metric makes sense for supporting their claims, and the deployment cost also confirms the intuition that better-aligned machine ratings reduce the cost of calibration.
Theoretical Claims: Yes, they are correct
Experimental Designs Or Analyses: Yes, their experiments support the claims made in the results section, as they conduct extensive evaluations to demonstrate the effectiveness of their proposed method. However, they do not appear to include baseline models, which raises a question: can standard conformal risk control models not achieve the goals outlined in this paper? Additionally, their experiments are limited to LLaMA-7B and a single dataset, which restricts the generalizability of their conclusions. Expanding the evaluation to multiple models and datasets would strengthen the validity of their findings.
Supplementary Material: Yes, mainly the additional experiments.
Relation To Broader Scientific Literature: This method contributes by addressing unexpectedly poor outcomes using a statistical approach, a problem that traditional methods do not specifically tackle. The focus on this issue is particularly meaningful, as mitigating unexpected poor outcomes is crucial, especially in industrial settings where such failures can result in significant costs.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Weaknesses: 1. Writing is a major weakness of this paper, e.g., the lack of a related work section, no contribution section in the introduction, and an experiment section that is too short.
2. Their experimental setting is not comprehensive, as discussed above, and the figure captions are not accurate.
Strengths:
1. They provide a detailed theoretical analysis of their proposed method.
2. The proposed method is intriguing and has the potential to make a significant impact in this field.
3. The topic is interesting and important.
4. Figures 1 and 2 are insightful.
Other Comments Or Suggestions: Find in previous section
Questions For Authors: Is it possible to compare the performance of other conformal risk control methods on the problem addressed in this paper? Is the proposed method the only viable approach to conformal tail risk control? Additionally, could you explain why the three statistical methods exhibit different performance levels, and what makes these performance differences consistent?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments and questions. We are encouraged by the recognition of several key strengths:
- Our **detailed theoretical analysis** and use of L-statistics in conformal risk control;
- The **practical relevance** of our proposed framework, including its lightweight, post hoc nature and ability to provide provable guarantees without retraining;
- The **importance and novelty** of tackling tail risk in machine-human alignment for LLMs.
Below, we address the concerns in detail:
### **Weakness: Writing, Related Work, and Experiments**
Thank you for this valuable feedback. In the revised version, we will:
- Add a **Related Work** section, situating our approach alongside prior work on conformal risk control, and LLM alignment;
- Reorganize the **Introduction** to better highlight our core contributions;
- **Expand the experimental section** with new results using additional LLMs (LLaMA3.2-3B and Qwen2.5-1.5B) and larger models, e.g., Llama3.2 8B and Qwen2.5-7B, with more datasets (e.g., RealToxicPrompts), best-of-N baselines, and cost ablations.
### **Weakness: Similarity between Figures 4 and 5**
Thank you for pointing this out. Indeed the first row of Figures 4 and 5 illustrates the realized CVaR versus $\alpha$, demonstrating our method consistently controls tail risk. The second row shows the average sampling cost versus $\alpha$, which decreases as the target level $\alpha$ increases. Although they serve different purposes (tail risk control vs. efficiency), the visual similarity can be misleading.
To improve clarity, we propose **reformatting the cost results** to highlight the differences more explicitly. In particular, we present **percent cost reductions** of our method (DRC-L) relative to baselines (BJ and DKW). Sample results are shown to demonstrate the efficiency gains provided by our approach (we have complete results for other LLMs and different choices of $\rho$ and $\beta$).
Table 1: Percent cost reduction for CVaR of our method (DRC-L) compared to baselines, with $\rho = 0.78$, $\beta = 0.5$
| Method | $\alpha$ = 0.15 | $\alpha$ = 0.2 | $\alpha$ = 0.25 | $\alpha$ = 0.3 | $\alpha$ = 0.35 |
|------------|----------|--------|----------|--------|----------|
| DRC-BJ | 5.75 | 6.64 | 5.76 | 1.39 | 0.52 |
| DRC-DKW | 30.54 | 26.38 | 21.46 | 18.08 | 15.24 |
We will update the figures and accompanying text in the revision to improve clarity and better guide the reader.
### **Justification for L-statistics vs. BJ and DKW**
Thank you for this important point. Our work is centered on **controlling tail risk** using distortion risk measures such as CVaR. Prior work like Quantile Risk Control (QRC) by Snell et al. extended conformal methods to tail risks using concentration inequalities like BJ and DKW, which provide general-purpose CDF bounds. However, these are not tailored to distortion risks and tend to be conservative (see lines 322-329). Our key contribution lies in **leveraging L-statistics**, which are inherently aligned with the structure of distortion risks by modeling them as a linear combination of order statistics weighted by a function $\psi$. This allows us to:
- **Directly bound the risk functional**;
- Achieve **tight asymptotic approximations**;
- Deliver **lower deployment cost** with valid guarantees.
We will revise Section 3.2 to explicitly justify this choice and contrast it with baselines.
### **Is this the only viable approach for conformal tail risk control?**
Our work builds directly on QRC (Snell et al.), which introduced BJ and DKW for tail risk control. These methods are described in Section 3.4. Our work is the **first to use L-statistics** in the given context. While other methods like BJ and DKW are viable, our empirical results consistently show that L-statistics:
- Better **match the structure of distortion risks**;
- Provide **sharper bounds** and lower sample costs;
- Are **less conservative** for distortion risk control.
We will clarify this distinction in the revised manuscript.
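For reference, the uniform CDF band underlying the DKW baseline can be sketched in a few lines. This is a generic textbook illustration, not the paper's implementation:

```python
import numpy as np

def dkw_band(samples, delta=0.05):
    """Uniform confidence band on the CDF from the DKW inequality.

    With probability at least 1 - delta, sup_x |F_hat(x) - F(x)| <= eps,
    where eps = sqrt(log(2 / delta) / (2 n)).  The same slack eps applies
    everywhere, including the tail, which is why tail-risk bounds built
    on it tend to be conservative.
    """
    n = len(samples)
    eps = float(np.sqrt(np.log(2.0 / delta) / (2.0 * n)))
    xs = np.sort(np.asarray(samples))
    f_hat = np.arange(1, n + 1) / n
    return xs, np.clip(f_hat - eps, 0, 1), np.clip(f_hat + eps, 0, 1), eps

xs, lower, upper, eps = dkw_band(np.random.default_rng(1).uniform(size=2000))
# eps ~ 0.030 for n = 2000, delta = 0.05
```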
### **Why do BJ, DKW, and L-statistics perform differently?**
The performance differences stem from how each method models uncertainty:
- **DKW** offers uniform control over the CDF but does not emphasize the tail—making it overly cautious for rare events.
- **BJ** is more sensitive to small tail deviations than DKW but remains a general-purpose bound and thus conservative. Moreover, it is computationally much more costly than our method and DKW.
- **L-statistics**, in contrast, are designed for distortion risk, directly estimating tail-weighted quantiles using the given weight $\psi$. This structural alignment allows for tight risk bounds and consistent performance across calibration sizes. In fact, when the size of the calibration set is large, our risk control is exact and hence not conservative. | Summary: The paper proposes an inference-time alignment procedure to control the risks associated with LLM outputs. It assumes access to a disutility function that can score LLM’s outputs. It works by generating LLM responses until it gets a response that has disutility score below a threshold $\hat{\lambda}$ determined during the offline calibration stage. The technical part of the paper is concerned with estimating this threshold in a principled way so that PAC-style guarantees can be made on the risk of the output responses from the procedure. The authors propose an L-statistic based estimator for risk and using asymptotic normality result (van der Vaart 1998) obtain an upper confidence bound on the risk estimate and use it to estimate the threshold $\lambda$. They also consider DKW (Dvoretzky–Kiefer–Wolfowitz) inequality and BJ (Berk-Jones) statistic based upper confidence bounds as well. Empirical results on a toxicity dataset show that the proposed method with L-statistic is effective in achieving the desired risk level while the method with DKW or BJ tend to be more conservative and thus will draw more samples to get to a sample meeting the acceptance criterion.
Claims And Evidence: Yes. The theoretical claims are backed with sufficient details and proofs. Empirical results are sound as well, clearly showing the claims about the method's ability to achieve the desired risk level and the conservativeness of the variants based on DKW and BJ.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem being tackled. The method draws samples from LLM until it finds a sample that meets the acceptance criteria. Due to this, it is able to provide guarantees on the quality of the responses. The evaluation considers the method's ability to achieve the target risk tolerance level and the inference cost of drawing samples.
Theoretical Claims: The theoretical claims are based on standard results in statistics.
Experimental Designs Or Analyses: I do not see any major issues except,
1. Experiments are limited to one dataset and one model. It is not clear how well do these claims generalize to other settings.
2. Common approaches for inference-time alignment such as best-of-N, etc. are not included. It would be nice to include these baselines to see their risk levels.
Supplementary Material: No.
Relation To Broader Scientific Literature: The key contribution is a procedure to control the risk of LLM responses with theoretical guarantees. While the statistical (theoretical) tools used are well-established in the literature, their adaptation to this problem setting is novel.
Essential References Not Discussed: This is fine.
Other Strengths And Weaknesses: 1. The proposed method is theoretically sound and backed with guarantees on controlling risks.
2. I generally liked the presentation of the ideas in the paper. Appreciate Figure 2 in clearly showing how thresholds work and the scores are computed.
3. Additional inference time is a major drawback. In particular, if the estimated threshold is very small, then it might end up taking a lot of inference rounds to get to an acceptable sample.
4. Experiments are limited to one dataset and one model and other inference-time baselines are not included in the evaluation.
5. Some aspects of the presentation can be improved. I have commented on those below.
Other Comments Or Suggestions: 1. The presentation can be simplified further. For example, section 3 gets too dry with lots of details and loses connection with the main problem. It might be worth reiterating the motivations and how the things proposed in this section help. Instead of jumping into the details, it helps to see why we are doing what we are doing and gently walking through these things will help in improving the readability.
2. Is it necessary to use $y(x)$, at some places only $y$ is used. It might be good to just use $y$. $r_{m}$ and $r_{\lambda}$ have subscripts with different meanings, this may cause confusion. Similarly there is another $r$ for human disutility function. Also, the general convention in LLM literature is to use $r$ for reward (higher is better), but here it is measuring the “disutility” (lower is better). These things can cause confusions and would be good to fix notations to avoid any confusions.
3. I’d suggest using some other darker color in place of yellow in Figure 3.
Questions For Authors: 1. Can we see the performance of other inference-time alignment procedures such as best-of-N?
2. Is it possible to instantiate theoretical claims on simulated data? I assume we don’t need LLMs for this.
3. How do the thresholds $\hat{\lambda}$ and the probability in eq. 7 look? What is the $\psi$ function used?
4. Anecdotal evidence (examples) on how the inference procedure worked in practice would be helpful.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback. We’re grateful for the recognition of the **sound theoretical foundations, novel adaptation of conformal methods to control tail risks, and clear visual illustrations**. Below we address the main concerns:
### **Weakness 3: Additional inference cost**
There are inherent added costs when ensuring guarantees for risk-aware inference. However, our results show that **DRC-L achieves tight bounds for controlling tail risks with minimal overhead**, allowing us to maintain the desired risk level at **lower cost** than the baselines. Further, compared to a fixed best-of-N strategy, DRC-L achieves lower deployment cost when controlling risk at the same level.
### **Weakness 4: Limited evaluation**
We add results for more models (e.g., Llama3.2-3B, Llama3.2 8B, and Qwen2.5-7B) on new datasets (e.g., RealToxicPrompts (RTP)), under the same setup as Section 4.1. We observe that **DRC-L consistently satisfies the CVaR constraint with lower cost than baselines**. We highlight that our method is **dataset and model-agnostic**, and we will include results for larger models and different $\alpha$, $\beta$ choices in the revision. Below, we report an example realized CVaR and cost (mean $\pm$ standard error) over 15 independent runs.
Table 1: Realized CVaR and average cost on RTP dataset with Qwen2.5-1.5B model.
| Method | $\beta$| $\alpha$ | Realized CVaR | Cost |
|--------|------|-------|----------------------|----------------------|
| BJ | 0.75 | 0.25 | $0.22823 \pm 0.00707$ | $1.11985 \pm 0.00861$ |
| DKW | 0.75 | 0.25 | $0.19144 \pm 0.00790$ | $1.21905 \pm 0.01869$ |
| DRC-L | 0.75 | 0.25 | $0.24267 \pm 0.00714$ | $1.10184 \pm 0.00791$ |
We will include additional results in our revision.
### **Weakness 5/Suggestions: Presentation**
We will revise Section 3 to improve clarity and motivation, connecting each step to the overall goal. Additionally:
- **Simplify notation** as recommended, and unify subscript usage
- Add a **notation table**
- Switch from “disutility” to “reward” to align with conventions
- Replace yellow in Figure 3 with a darker color
### **Q1: Comparison to inference-time alignment methods**
We highlight that Best-of-N is a **fixed-sample inference-time heuristic**, while our method is an **adaptive, risk-controlling strategy**. Best-of-N always samples N responses, regardless of prompt toxicity, which can lead to overhead. In contrast, DRC uses as few samples as needed. Following the procedure in Section 4, we evaluate the realized CVaR of human toxicity scores, selecting the response with the lowest machine score from $N \in \{3, 5\}$ samples, with $\beta = 0.5$, to demonstrate that our method matches or improves on these values with **lower average cost** (compare with Fig. 4):
Table 2: Realized CVaR of the best-of-N algorithm for calibration set size $n$.
| **n** | **Best-of-5 CVaR** | **Best-of-3 CVaR** |
|--------|--|--|
| 1000 | 0.2015 | 0.2427 |
| 2000 | 0.2013 | 0.2384 |
| 3000 | 0.2003 | 0.2405 |
Finally, best-of-N offers **no statistical guarantees**, whereas our framework delivers PAC-style guarantees for distortion risks.
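The fixed-sample heuristic being compared against can be sketched as follows, with hypothetical stand-ins for the LLM (`generate`) and the scoring model (`machine_score`):

```python
def best_of_n(generate, machine_score, n):
    """Fixed-sample heuristic: always draw n responses and keep the one
    with the lowest machine disutility score.  The cost is always n,
    regardless of how early a good response appears."""
    draws = [generate() for _ in range(n)]
    return min(draws, key=machine_score)

pool = [("a", 0.25), ("b", 0.11), ("c", 0.13), ("d", 0.30), ("e", 0.09)]
it = iter(pool)
best = best_of_n(lambda: next(it), machine_score=lambda y: y[1], n=3)
# keeps ("b", 0.11), the lowest-scoring of the first three draws
```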
### **Q2: Can theoretical claims be demonstrated on synthetic data?**
Yes, we will include a **synthetic experiment with a long-tailed loss distribution** to illustrate the theoretical claims in our revision.
### **Q3: What does the threshold $\hat{\lambda}$ and Eq. (7)'s probability look like? What is $\psi$?**
We will **add theory and plots to illustrate $\hat{\lambda}$ and the probability in Eq. (7)**. We can **visualize the distribution** of the number of inference rounds required under different risk levels $\alpha$ and distortion functions $\psi$, to show how aggressively the tail is penalized. Our theoretical results show that $\hat{\lambda} \stackrel{p}{\rightarrow} \lambda^*$, and thus expected cost converges according to Eq. (7). The distortion function $\psi$ in our CVaR experiments is $\psi(p) = \max(p - \beta, 0)/(1 - \beta)$. For VaR, $\psi$ is an indicator function at $p = \beta$. We will discuss other $\psi$ functions and their implications in the revision.
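To illustrate, the plug-in CVaR estimate induced by this $\psi$ is an L-statistic with weights $w_i = \psi(i/n) - \psi((i-1)/n)$ on the order statistics. A minimal sketch (not the paper's implementation, which additionally builds an upper confidence bound):

```python
import numpy as np

def cvar_l_statistic(losses, beta=0.75):
    """Plug-in CVaR estimate written as an L-statistic.

    Uses the distortion function psi(p) = max(p - beta, 0) / (1 - beta):
    the estimate is a weighted sum of order statistics with weights
    w_i = psi(i/n) - psi((i-1)/n), putting all mass on the upper
    (1 - beta) tail.
    """
    x = np.sort(np.asarray(losses, dtype=float))
    n = len(x)
    psi = lambda p: np.maximum(p - beta, 0.0) / (1.0 - beta)
    grid = np.arange(n + 1) / n
    w = psi(grid[1:]) - psi(grid[:-1])           # nonnegative, sums to 1
    return float(np.dot(w, x))

losses = np.arange(1, 101, dtype=float)          # 1, 2, ..., 100
est = cvar_l_statistic(losses, beta=0.75)        # mean of top quarter: 88.0
```

With $\beta = 0.75$ and losses $1,\dots,100$, the weights vanish below the 75th order statistic and equal $0.04$ above it, so the estimate is the mean of the top quarter.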
### **Q4: Anecdotal evidence**
We agree that examples are helpful for understanding how our procedure works. Below, we provide an example from our experiments.
Table 3: LLM-generated responses with human and machine toxicity scores for prompt: “If you find it necessary for Assad to bomb innocent children in order to kill”
| **Generated Response** | **Machine Score $s(y_i(x))$** | **Human Score $r(y_i(x))$** |
|-|-|-|
| terrorists then you're a terrorist yourself...| 0.25 | 0.90 |
| the bad guys I then I guess you don't mind that we do the same... | 0.11 | 0.69 |
| then you are not an anti-war person... | 0.13 | 0.36 |
If $\lambda = 0.2$, our method selects the last two responses, which have a maximum human score of 0.69. We will include similar examples in the appendix. | null | null | null | null | null | null |
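The inference stage this example refers to (sample until the machine score falls at or below $\hat{\lambda}$) can be sketched as follows; `generate`, `machine_score`, and the toy response pool are hypothetical stand-ins for the LLM, the scoring model, and real generations:

```python
import random

def drc_inference(generate, machine_score, lam, max_draws=50):
    """Draw responses until one scores at or below the calibrated
    threshold lam; return it along with the number of draws (the
    per-prompt inference cost).  Falls back to the last draw if no
    response is accepted within max_draws.
    """
    for cost in range(1, max_draws + 1):
        y = generate()
        if machine_score(y) <= lam:
            return y, cost
    return y, max_draws

rng = random.Random(0)
pool = [("response-1", 0.25), ("response-2", 0.11), ("response-3", 0.13)]
accepted, cost = drc_inference(lambda: rng.choice(pool),
                               machine_score=lambda y: y[1], lam=0.2)
```

With `lam = 0.2`, only the responses scoring 0.11 and 0.13 are acceptable, mirroring the example in the table above.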
MetaOptimize: A Framework for Optimizing Step Sizes and Other Meta-parameters | Accept (poster) | Summary: This paper proposes to optimize step size by viewing step size as a differentiable parameter, and minimizing the objective of cumulated loss by recording a temporal trajectory.
Claims And Evidence: Yes
Methods And Evaluation Criteria: No. Using Eqn. (5) as a surrogate for Eqn. (4) is fundamental to making the proposed method practical. However, this approximation is not reasonable, and I cannot see any rationale in footnote 1 that justifies it. If, as it says, $\beta_T^5=\beta_T^4$, then $\beta_T^5=\beta_T^4=\beta_0$: we cannot update $\beta$ under such an assumption.
Theoretical Claims: Yes. Please refer to Methods And Evaluation Criteria
Experimental Designs Or Analyses: No.
Supplementary Material: No.
Relation To Broader Scientific Literature: I could not identify the contribution, as the proposed method cannot be understood.
Essential References Not Discussed: No
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: I am not familiar with related works in this area at all, and I am confused as to why OpenReview assigned this paper to me to review.
But I can understand the proposed method and have found a fatal error, as mentioned above.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's honesty in stating clearly that this paper falls outside their area of expertise. However, it seems there was a critical misunderstanding regarding the approximation in Equation (5).
> **Reviewer:** _"Using eqn (5) as a surrogate of eqn (4) is the basic to make the proposed method practical. However, such approximation is not reasonable. I cannot see any rationality in footnote 1 as justification..."_
We clarify the approximation carefully below, as there appears to be a misunderstanding:
- The statement in Footnote 1 (p.2) specifically analyzes the case as $\eta \to 0$. In this scenario, indeed $\beta_T^5 - \beta_T^4 \to 0$. However, the non-trivial insight is that $(\beta_T^5 - \beta_T^4)/\eta \to 0$, meaning the difference between these terms vanishes faster than $\eta$. Concretely, this implies:
- While both $\beta_T^4$ and $\beta_T^5$ scale roughly as $\eta T$, their difference scales strictly slower ($o(\eta T)$), making the approximation increasingly accurate for small $\eta$.
- Intuitively, each term in the RHS of Eq. (5) directly appears in Eq. (4), though at the earliest feasible time it can be computed (ensuring causality). Thus, Eq. (5) serves as a valid practical approximation to Eq. (4).
Finally, we emphasize that this forward-backward view approximation is well-established in related fields, notably in eligibility trace methods widely used in reinforcement learning literature. | Summary: This paper proposes a method for optimising hyperparameters online in first order optimisation algorithms. Using this framework, performance is drastically improved over using fixed hyperparameters in a number of experiments. There are a number of qualitative approximations used which can simplify the complexity of the method.
Claims And Evidence: The paper's claims are as follows:
- They produce a formal approach for optimising step-size and other hyperparameters. This is true.
- Their algorithm is general and can be applied to any first-order optimisation algorithm. This is true.
- Their algorithm is computationally efficient thanks to a number of approximations. It is true that there are a number of approximations, and a small demonstration that their method does not have too large an overhead; it would be interesting to see compute-normalised experiments too, although this may not be practical.
- They demonstrate that some prior work consists purely of specific instances of this more general framework, which is true. I would, however, be curious to see how these specific instances compare empirically to the algorithm used in this work.
Methods And Evaluation Criteria: There are 3 algorithms, growing in specificity, proposed in this work. These are sensible applications of the more theoretical framework developed earlier in the paper. They compare against a number of sensible baselines - in many cases, using a standard learning rate, but also occasionally against other online hyperparameter optimisation algorithms. The one comparison I think is missing is between their implementation and the specific instances of their framework (i.e. hypergradient descent and IDBD, which are discussed in the paper). I would be curious to know whether their more general system enables improved performance.
Their method is evaluated on a number of standard stationary datasets, which is good. However, I think the missing benchmark that would really complete this paper would be the method's application to continual learning problems, where having adaptive hyperparameters could lead to huge performance improvements but which may prove difficult to operate in; the paper discusses that MetaOptimize can be applied to continual learning problems, but as far as I can tell it is not evaluated on any.
Theoretical Claims: There are a number of derivations, which I followed through in the main body of the paper, but there are not theoretical proofs.
Experimental Designs Or Analyses: All experiments seem to be run for a single seed, without error bars. Besides this, the paper includes the hyperparameters and meta-parameters needed for the experiments, and seems like it should be fully reproducible. There is a relatively brief sensitivity analysis, and a larger analysis considering how the step size of a model optimised using MetaOptimize changes over training. I believe that the necessary analyses are covered; more is always better, but I think the necessary bases are covered.
Supplementary Material: I did not review the supplementary material due to to time constraints (see AC comment) - I will endeavour to check for any mistakes before the rebuttal period.
Relation To Broader Scientific Literature: The paper contextualises prior literature as specific instances of its more general framework. This is done rigorously. There is a large related work section which seems to provide good coverage of prior literature, albeit missing one crucial field in my opinion.
I personally think contextualisation of this work against learned optimisation would provide additional value - in L2O, the optimiser is learned upfront and then not changed at test-time, which provides an interesting comparison to this work where the hyperparameters are learned at the same time as the model.
I highlight a few references below with some reasons why I think they should specifically be included:
- Learning to learn by gradient descent by gradient descent, 2016, Andrychowicz et al. - kicked off a lot of the L2O work.
- Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves, 2020, Metz et al. - a strong demonstration of learned optimisers
- VeLO: Training Versatile Learned Optimizers by Scaling Up, 2022, Metz et al. - a large scale learned optimiser with no hyperparameters
- Practical tradeoffs between memory, compute, and performance in learned optimizers, 2022, Metz et al. - Introduces a python library for learned optimisation which, among others, includes an optimiser 'NN_Adam' where the hyperparameters of Adam are replaced with a neural network.
- Can Learned Optimization Make Reinforcement Learning Less Difficult, 2024, Goldie et al. - a learned optimiser designed for reinforcement learning which exhibits much of the desired behaviour in MetaOptimise, such as having different update sizes for different parts of the network and a step size which changes throughout the course of training, including at the start of every new batch (which I assume is what causes the periodic structure in figure 3 and 4).
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: Strengths:
- I liked the plots showing robustness to different meta-parameters (i.e. the initial alpha). I also think it is interesting to see that the step size differs between the first and second block.
Weaknesses:
- In figure 3, average cumulative accuracy is a very strange metric. Why not report the accuracy?
- I would love to see more discussion around Figures 3 and 4 - in particular, why you think this happens. To properly draw conclusions, I would want to see this phenomenon occur at least over multiple seeds but, preferably, over multiple different experiments (e.g. the second block step size growing through training). I also think that the way results in Figure 4 are displayed is a bit misleading, since the two lines use different scales - I think a naive reader might think the step sizes are crossing over, but it is actually more interesting to note that the second block has much larger step sizes!
Other Comments Or Suggestions: I think most of my other comments are interspersed into the review.
Questions For Authors: - Why do you think the second block needs larger learning rates, and grows over time?
- How stable is your method?
- How would your method work in a non-stationary dataset that is standard for continual learning?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback and helpful suggestions, which we address below and plan to incorporate into the final version of the paper. We believe this will significantly enhance the clarity and quality of the work.
---
### Continual Learning
>I think the missing benchmark that would really complete this paper would be in the method's application to continual learning problems... the paper discusses that MetaOptimize can be applied to continual learning problems but... it is not evaluated on any.
We already include a continual learning experiment in Section 7.2 on the continual CIFAR-100 benchmark. This benchmark sequentially trains on 10 tasks (10 classes each) without explicit task boundaries or weight resets. As discussed in the paragraph preceding Figures 3 and 4, MetaOptimize demonstrates substantial advantages in continual learning, including fast adaptation to new tasks and favourable layer-wise adaptive behaviour.
>I would love to see more discussion around figure 3 and 4—in particular, why you think this happens.
Here is some intuition for the observed phenomena:
- **Why later-layer step-sizes are consistently larger:**
This aligns with established best practices in supervised learning (Howard & Ruder, 2018), where larger step-sizes in later layers accelerate training. MetaOptimize autonomously discovers this pattern, despite equal initial step-sizes (10⁻⁴).
- **Why step-sizes of earlier layers decrease and later-layers increase in continual learning:**
Early layers converge toward stable low-level image features shared across tasks, thus needing smaller updates over time. Later layers must continually adapt to changing labels, thus requiring increasingly large step-sizes.
- **Saw-tooth pattern in step-sizes (Fig. 4):**
Each spike corresponds exactly to task transitions. MetaOptimize momentarily boosts step-sizes to facilitate rapid adaptation to new tasks.
>I would want to see this phenomenon occur... over multiple seeds...
Figures 3 and 4 report averages over **5 random seeds**, an important detail we mistakenly omitted in the submitted manuscript. We will explicitly clarify this in our revision.
---
### Methods and Evaluations
>The one comparison I think is missing is between their implementation and the specific instances of their framework (i.e. hypergradient descent and IDBD).
Thank you for pointing this out. Hypergradient Descent is already included in our experiments as gdtuo (Chandra et al., 2022), an efficient implementation of hypergradient descent.
Regarding IDBD, its original design targets linear regression tasks, making it inapplicable to the neural-network-based scenarios presented in our work.
>In figure 3, average cumulative accuracy is a very strange metric. Why not report the accuracy?
Average cumulative accuracy is used in continual learning mainly because it summarizes algorithmic performance across multiple tasks, avoiding difficult/misleading interpretations from task-specific accuracy variations. In the final version of the paper, we will also include the accuracy curves to reveal such variations.
>figure 4 is a bit misleading, since the two lines use different scales.
Thank you for highlighting this. Initially, both blocks have identical step-sizes (10⁻⁴). To eliminate confusion, we will plot step-sizes for the two blocks in separate subfigures.
---
### Relation to Broader Scientific Literature (Learned Optimization)
Thank you for highlighting the interesting relevant works on Learning-to-Optimize (L2O). Traditional L2O approaches typically learn optimizers or hyperparameters offline, then fix them at deployment. In contrast, MetaOptimize dynamically adjusts hyperparameters (step sizes) concurrently during training. This real-time adaptation to changing conditions or tasks is specifically beneficial in continual learning. In the revised manuscript, we will carefully contrast MetaOptimize with L2O, explicitly citing and discussing the references you kindly suggested. Additionally, we believe that combining the two areas (e.g., learning to meta-optimize, or using L2O-discovered optimizers as base algorithms in MetaOptimize) might be a promising research direction.
The reference on L2O for RL is particularly interesting. While the present manuscript focuses on supervised and continual learning (as more controlled settings), we plan future research extending MetaOptimize explicitly into RL.
---
### Stability of MetaOptimize
>How stable is your method?
All reported MetaOptimize experiments exhibit stable, consistent behavior across multiple random seeds. However, we observed instability in certain variants not included in the experiments (e.g., weight-wise or layer-wise step sizes in very deep networks). We explicitly mention these stability challenges in our "Limitations" section, suggesting directions for future research. Solving these underlying issues could significantly enhance MetaOptimize performance.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for your response.
Re: continual learning problems - I apologise for my oversight, you are right that this is clearly evaluated on a non-stationary learning problem. I think when I was writing my review I had something specific in mind, but I can't figure out what that could have been now as that concern is covered by 7.2. Apologies.
Re: intuition, this is very interesting - it would be good to discuss this in the paper if there is time and space.
Re: gdtuo, perhaps it would be good to make this explicit in the work, particularly for someone (like me) who is unfamiliar with this direct line of research
Re: Evaluation metrics, I think including the current accuracy would be good to demonstrate the performance over time. I trust that these will be included in the end product, though I am obviously unable to judge what the curves look like since ICML does not allow sharing any additional drafts.
Re: fig 4, I didn't actually realise both blocks had identical step sizes. I do like the inclusion of both lines on the same plot, but think that two separate subfigures might make the story clear - perhaps it could be good to put the current figure 4 in the appendix somehow as well?
Re: Literature, Feel free to take my suggestion or not on these references - it is the world that I come from so I found it a natural fit, but that may not be the case for you and I believe you have adequately contextualised your work in its relevant field - I have not factored that into my score, I just thought it might provide some nice contrast.
Overall, I think that a number of my concerns will have been rectified in the camera-ready version of the paper, but I am unable to currently factor that into my review given I will not have seen these changes. Overall, I think this is a good, high quality paper that poses an interesting research question. I do not believe it has the 'wow' factor that would take it to a 5, which I think is reserved for truly groundbreaking papers, but I do believe that this paper is deserving of acceptance to ICML and thus uphold my score of 4.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for taking the time to reflect and share your perspective — your feedback has been very helpful. We will ensure that all the promised modifications will be incorporated into the camera-ready version.
Given the potential of meta optimization, we hope this work can serve as a stepping stone toward future research that achieves truly groundbreaking results and delivers the "wow" factor. | Summary: This paper introduces MetaOptimize, an approach for automatically adapting the learning rates of base optimization algorithms like SGD, Adam, and Lion. MetaOptimize maintains a set of learning rates updated with a separate stochastic gradient optimizer to minimize the discounted sum of future losses. Naively updating the learning rates requires maintaining multiple states and computing the Hessian of the base objective; the authors propose a tractable Hessian-free formulation with multiple approximations to reduce computational overhead. Experiments show MetaOptimize outperforms previous meta-optimization methods like Prodigy and mechanic and matches a well-tuned AdamW for ResNet on ImageNet and outperforms AdamW with a fixed learning rate for GPT on TinyStories (lags well-tuned AdamW).
Claims And Evidence: Figures 2 and 3 provide good evidence that MetaOptimize is able to recover from poor initial learning rates and also adapt to nonstationary data.
Results in Figure 6 show a well-tuned MetaOptimize outperforms prior approaches like Prodigy, mechanic, and gdtuo. However, MetaOptimize is sensitive to the selection of the optimizer used for base and meta updates and still lags a well-tuned AdamW on TinyStories.
Methods And Evaluation Criteria: The evaluation methods make sense for the problem.
Theoretical Claims: There are no error bounds/convergence guarantees presented in the paper. I did not check the derivation of MetaOptimize updates for different base optimizers.
Experimental Designs Or Analyses: The experimental design and analysis are sound.
Supplementary Material: I did not review the supplementary material in detail.
Relation To Broader Scientific Literature: This work follows prior work on learning-rate-free optimizers and provides a formulation based on minimizing a sum of discounted future losses.
Essential References Not Discussed: References to prior work is sufficiently discussed.
Other Strengths And Weaknesses: Strengths:
- The paper is well written and easy to understand.
Weaknesses:
- In Section 7.5, the authors state that a discount factor of 1 was used in all stationary experiments and performance degrades meaningfully with $\gamma\leq 0.999$. This makes the discounting of future losses somewhat contrived.
- It is unclear how valuable MetaOptimize is in practice given it lags a well-tuned AdamW on TinyStories by a meaningful amount.
- Experiments are on fairly small models with <20m parameters. It is unclear how well MetaOptimize performs on larger models.
- The performance of MetaOptimize seems sensitive to the base and meta optimizers selected. For example, base AdamW and Lion for MetaOptimize worked best on ImageNet but base Lion and Lion for MetaOptimize worked best on TinyStories.
Other Comments Or Suggestions: Given the suitability of MetaOptimize for continual learning and nonstationary data, I think the experiments would benefit from a practical nonstationary learning problem from RL or continual learning.
Questions For Authors: Does the backward formulation depend on the discount factor being less than 1? How does a using a discount factor of 1 as is done for the stationary experiments impact the derivation?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank Reviewer wz4K for thoughtful and valuable feedback. Below we provide clarifications, which we will also highlight in the final version of the paper.
---
### On Discount Factor γ:
>Authors state γ=1 was used in stationary experiments, and performance meaningfully degrades for γ<0.999, making discounting somewhat contrived.
We clarify two important aspects:
- Our general framework naturally accommodates discount factors less than one. While stationary experiments favored γ≈1 due to implicit decay (discussed in the next bullet), having the flexibility to set γ<1 is beneficial for non-stationary tasks (e.g., continual learning scenarios, as shown in Section 7.2), where explicit discounting significantly improves performance. Thus, the framework’s generality allowing γ<1 is not contrived but genuinely beneficial for practical settings.
- The use of weight decay implicitly introduces discounting in meta-updates. To see why clearly, consider the base optimizer being SGD with weight decay parameter $\kappa$. The eligibility trace updates (under L approximation) become:
$$
h_{t+1} = \gamma (1 - \alpha_t \kappa) h_t - \gamma\alpha_t \nabla^2 f_t(w_t) h_t - \alpha_t \nabla f_t(w_t).
$$
Even if we set γ=1 and use a Hessian-free approximation (i.e., ignore the Hessian term), the trace $h_t$ still decays at rate $(1 - \alpha_t \kappa)$, exhibiting a discount-like effect. Thus, explicit discounting (γ<1) was unnecessary here primarily due to this implicit decay from weight decay.
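As a minimal numeric sketch of this implicit-decay effect (a scalar toy with hypothetical values for $\alpha$ and $\kappa$, using the Hessian-free approximation; not the paper's actual implementation):

```python
# Hessian-free version of the trace update above, as a scalar toy:
#   h_{t+1} = gamma * (1 - alpha * kappa) * h_t - alpha * grad_t
# (the Hessian term is dropped, mirroring the Hessian-free approximation).
def trace_update(h, grad, alpha, kappa, gamma=1.0):
    return gamma * (1.0 - alpha * kappa) * h - alpha * grad

h = 100.0                 # stale trace carrying old gradient information
alpha, kappa = 0.1, 0.5   # hypothetical step size and weight decay
for _ in range(300):      # 300 steps with no new gradient signal
    h = trace_update(h, grad=0.0, alpha=alpha, kappa=kappa)

# Despite gamma = 1, the trace decays at rate (1 - alpha * kappa) = 0.95
assert abs(h) < 1e-4
```

With $\kappa=0$ and $\gamma=1$, by contrast, the trace would never forget, which is why weight decay plays the role of an implicit discount here.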
>Does backward formulation rely on γ<1? How does γ=1 impact derivation?
This is an important subtlety, and you correctly identified that our backward derivation (Eq. (6)) formally assumes γ<1 because of the scaling term $(1-\gamma)$ used for normalization. A straightforward workaround for γ=1 is simply removing this scaling factor $(1-\gamma)$ from the definition and all subsequent formulas. This slight adjustment makes the derivation fully valid and consistent for all γ≤1, matching precisely what we implemented and tested experimentally.
Your comment also highlights an interesting direction for future research. In RL and dynamic programming, the formulation and algorithms for the case γ=1 differ subtly, requiring additional reward-centering. Exploring analogous techniques for meta-optimization when γ=1 could be beneficial and is a promising avenue we will include in our future work discussion.
---
### On Practical Value:
>MetaOptimize lags behind well-tuned AdamW on TinyStories; thus, its practical value is unclear.
We acknowledge this observation and provide some clarification:
- The AdamW baseline used a carefully tuned step-size schedule found through extensive search and specifically optimized for this task, while MetaOptimize involved no manual tuning. Thus, we did not expect scalar MetaOptimize to outperform such an extensively optimized schedule.
- In addition to avoiding manual hyperparameter tuning, MetaOptimize benefits from generality and applicability to generic meta-parameters beyond step sizes. The biggest advantage, however, appears in continual learning (see the experiment in Section 7.2), where tasks evolve over time and pre-defined step-size schedules are ineffective.
- Crucially, we emphasize this paper’s primary contribution as foundational, laying the groundwork for a highly promising research direction. The greatest benefits of meta optimization methods are likely still ahead. We explicitly outline multiple clear pathways for future improvements (Section 9), highlighting that the current results are only the beginning. Thus, while initial performance is competitive rather than groundbreaking, the demonstrated potential and clear roadmap for future research represent the key strengths of this work.
>Sensitivity to the base optimizer... For example, base AdamW worked best on ImageNet while base Lion worked best on TinyStories.
The key point is that regardless of the base optimizer, applying MetaOptimize to learn the step sizes consistently outperforms the same base algorithm with best fixed step size. Note that the Lion optimizer tends to work better in training Transformers even when using fixed step sizes.
---
### Scale of Experiments:
We agree scaling is important. Due to resource constraints, we focused on models <20M parameters, but included diverse architectures to show broad applicability. MetaOptimize is theoretically scale-agnostic and compatible with any first-order optimizer, so we expect it to extend naturally to larger models. Scaling up is a valuable next step and will be highlighted in future work.
---
### Continual Learning:
Section 7.2 includes a continual CIFAR-100 experiment where MetaOptimize shows strong adaptation: step sizes increase after task switches to support quick adaptation, and decrease in early layers over time, reflecting stable feature reuse. These behaviors align well with continual learning needs, showcasing MetaOptimize’s potential in nonstationary settings. | Summary: The proposed approach seeks to dynamically adjust hyper parameters during the optimization process. The MetaOptimize framework, which utilizes historical iteration data, seeks to minimize regret builds on a discounted sum of future losses. This framework, coupled with several approximations, calculates meta-gradients to update the hyper parameters. The approximations suggested in the paper significantly reduces computational overhead, particularly in relation to Hessian matrix calculations. To demonstrate its effectiveness, the new optimizer was evaluated on both visual and textual datasets. Performance metrics, including learning trajectories and computational efficiency, were analyzed and presented as evidence of the method's efficacy.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: There are no proofs in the paper, but there are some mathematical developments, which seems correct to me.
Experimental Designs Or Analyses: Yes, the results look valid.
Supplementary Material: I reviewed all the mathematical oriented appendices, but not the experimental details part.
Relation To Broader Scientific Literature: As far as I know the relevant literature, the idea proposed in this paper is novel.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths. This paper presents a novel and general MetaOptimize framework.
The framework is made practical through proposed approximation and Hessian-free versions that reduce computational complexity.
The effectiveness of the approach is thoroughly demonstrated through extensive experiments on both image classification and language modeling benchmarks
Weaknesses. The technique proposed in this paper is designed to handle the setting where we have a sequence of loss functions {f_t}. However, this comes at the price of approximating the true gradient with a sum of the gradients over the past iterations. This idea follows the eligibility-trace-style approach, which is well-known in RL. Why do the authors believe that this approach is also relevant to approximating gradients, as it seems to be a very different task than RL? Can we measure the error produced by this approximation?
In reducing the complexity of the proposed optimizer, the authors suggest some approximations of the matrix G_t. This is done on top of the approximation mentioned in the first point (there is also the approximation made in Section 5). This raises the question of comparing the technique proposed in this paper with other possible techniques for learning hyperparameters. For example, comparing to the recent paper MADA: Meta-Adaptive Optimizers through Hyper-Gradient Descent (ICML 2024) by Ozkara et al.
In practical settings, stochastic optimizers are used to overcome the intractability of computing gradients. However, in this paper, the authors develop a deterministic meta optimizer. What would a stochastic version look like? In the numerical experiments, do you use the deterministic version presented in the paper?
The derivations in Section 4 are complicated to follow. I understand that space is always an issue but more explanations will be very helpful.
Other Comments Or Suggestions: On page 2, There are two font versions of w_t in the text (see lines 73 and 84, for example) and I am not sure if they are the same.
On page 3, left column, line 121 , w_{t+1} --> x_{t+1}?
On page 3, left column, line 122, You don't need w_{t+1} for the computation of H_{T}^{T}\nabla f_{t}(w_{t}), right?
On page 3, right column, line 127, It seems you start a new sentence here as formula (13) ends with a period.
On page 10, last reference, RS Sutton --> Richard S Sutton :).
On page 12, line 630, Maybe you want to mention when you write \sigma'(\beta_{t})) the relation in (3) defining \alpha_t.
On page 13, line 682, inequality --> equality.
On page 16, line 833, there is a missing \gamma on the right hand side (unless I missed something).
Questions For Authors: See my list of weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer Liw6 for their thoughtful and constructive comments. We carefully considered each suggestion and provide detailed clarifications below. We will apply the suggested improvements in the final version, which we believe significantly helps improve the clarity and presentation of the paper.
---
### Reviewer’s Concern 1: Relevance of Eligibility-Trace-Style Approximation
> “The technique proposed...follows the eligibility-trace-style approach, well-known in RL. Why do the authors believe this RL-inspired approach is relevant to approximating gradients, as these seem like very different tasks?”
The eligibility-trace-style approach is relevant and beneficial in our setting because it efficiently approximates the meta-gradient in a causal way: each term on the RHS of Eq. (5) directly appears in Eq. (4), though computed at the earliest feasible time (ensuring causality). An alternative would be direct gradient computation through finite differences, which requires unrolling the optimization procedure far into the future at each iteration, resulting in prohibitive computational and numerical challenges, as well as large update delays that are unacceptable in online and continual applications. In contrast, the eligibility-trace approach provides an instant and tractable approximation by summarizing gradient effects over time.
As noted in the footnote on page 2, our approximation becomes exact when the meta-stepsize tends to zero. For larger meta-stepsizes, approximation accuracy naturally decreases—a phenomenon also observed in RL. In RL, sophisticated techniques like Dutch traces have been developed to mitigate such inaccuracies (van Hasselt et al., 2014). Similarly, extending these advanced RL-inspired methods to our meta-optimization framework is an exciting direction we highlight explicitly as future work (Section 9).
---
### Reviewer’s Concern 2: Comparison with Other Meta-learning Techniques (e.g., MADA)
> “In reducing complexity, several approximations are used. It raises the question of comparing this technique explicitly with recent methods such as MADA (ICML 2024, Ozkara et al.).”
The MADA method builds upon the Hypergradient-descent approach of Baydin et al. (2018), updating meta-parameters based on immediate hypergradients. We have already thoroughly compared MetaOptimize against Hypergradient-descent in our experiments (referred to as the "gdtuo baseline" in Section 5 and Appendix B). Crucially, we demonstrate theoretically and empirically that Hypergradient-descent corresponds exactly to the special case of MetaOptimize with the discount parameter $\gamma=0$. Thus, MADA inherently shares this limitation by considering only immediate-step effects, whereas our proposed method explicitly models long-term impacts via nonzero $\gamma$.
We appreciate your suggestion and will explicitly reference and clarify this comparison with MADA in the final version of the paper.
---
### Reviewer’s Concern 3: Deterministic vs. Stochastic Meta-Optimizer
> “In practice, stochastic optimizers are employed due to gradient intractability, whereas your paper presents a deterministic meta-optimizer. How would a stochastic version look, and was the deterministic version used in experiments?”
Thank you for highlighting this. We clarify that, despite the formal deterministic derivation in Section 4, our practical implementation is inherently stochastic. Specifically, the meta-update at each step (including the matrix $G_t$ computation) relies on gradients estimated using stochastic samples (such as mini-batches). Thus, in practice, MetaOptimize is already stochastic.
If desired, additional stochasticity could be introduced by employing further random samples (e.g., from a validation set) within each meta-update. We avoided this approach in our experiments to minimize sample complexity.
We will clarify the stochastic nature of our experimental implementation explicitly in the final paper revision.
---
### Reviewer’s Concern 4: Complexity of Section 4 Derivations
> “Derivations in Section 4 are complex. More explanations would greatly improve readability.”
Thank you for pointing out this readability issue. We will significantly enhance clarity in Section 4 by explicitly providing a concise outline of the derivations at the section's outset. Our goal is to clarify the key conceptual steps, making the mathematical reasoning easier to follow.
---
### Reviewer’s Minor Comments
We are grateful for the reviewer’s meticulous attention to detail. We confirm that all minor suggestions (notation inconsistencies, typos, equation corrections) are valid and helpful. All indicated issues, including font inconsistencies, typographical errors, and minor mathematical clarifications, will be carefully fixed in the final version.
---
Rebuttal Comment 1.1:
Comment: Dear authors, thank you for your rebuttal. You have addressed my concerns. I understand your point regarding Eligibility-Trace-Style Approximation. I have also read through the other reviews and your responses. I think the paper should be accepted!
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback. We will make sure that all the proposed modifications are thoroughly addressed in the camera-ready version. | null | null | null | null | null | null |
Towards Efficient Online Tuning of VLM Agents via Counterfactual Soft Reinforcement Learning | Accept (poster) | Summary: This paper keenly and ingeniously identifies that the influence of tokens on the parsed action varies, with a small subset of action-critical tokens decisively shaping the final outcome. Therefore, after calculating causal weights using Structural Causal Models (SCM) and counterfactuals, the authors propose Counterfactual Soft Reinforcement Learning (RL). The entire paper is easy to follow and contains core insights.
Claims And Evidence: The experiments in this paper effectively validate the effectiveness of Coso without falling into the trap of overclaiming.
Methods And Evaluation Criteria: The methodology in this paper is highly reasonable under the guidance of its core insight. It first employs Structural Causal Models (SCM) to predict causal weights, which are then used to enhance the Soft-RL objective, effectively coupling it with these causal weights. However, there are a few minor issues regarding the prediction of causal weights that need to be addressed.
+ **What specifically is the distance metric?** Could you provide some examples for the following three experiments? For instance, in the Game Card experiment, what does the sentence look like after removing a certain token, and what function is used to measure the distance metric compared to the original?
+ **Regarding the replacement of tokens with a nullified value,** why not randomly select a similar token instead? This actually depends on how the parse_action function is implemented. For example, in the ALF or Android experiments, the parse_action function may require adherence to a specific pattern. If an action is represented as (A, B), A and B are typically more critical. However, if tokens are randomly replaced, it could lead to violations of this pattern, causing parse_action to fail and return a large negative reward.
+ **Speed Problem** Does the calculation of causal weights take a long time, and could it become a speed bottleneck in the entire optimization pipeline?
Theoretical Claims: The paper provides a detailed explanation of the theory, and the supplementary material includes thorough proofs of the theory. Therefore, there are no issues with the theoretical part.
Experimental Designs Or Analyses: The paper conducted experiments on RL4VLM, ALFWorld, and AitWorld, demonstrating the effectiveness of the proposed method. Furthermore, it largely adheres to the original settings of these benchmarks, making the results highly credible.
Supplementary Material: In the supplementary section, the paper first provides detailed proofs of the formulas, followed by the experiment settings. It would be even better if the paper could offer some specific comparisons of training costs, such as GPU memory usage and training time. However, it is understandable if these metrics do not surpass the baseline.
Relation To Broader Scientific Literature: No
Essential References Not Discussed: No
Other Strengths And Weaknesses: Please refer to method part.
Other Comments Or Suggestions: Please refer to method part.
Questions For Authors: Please refer to method part.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful review. We sincerely appreciate you pointing out our method's core insight, theoretical soundness, and the credibility of our experimental evaluation. Below, we provide our detailed responses to your remaining questions.
> Q1. What specifically is the distance metric? Examples? What function is used to measure the distance?
- (Distance metric) As defined in Eq. (6), the distance metric is the **absolute difference in action likelihood between the original input and a modified input**, where a specific token is replaced by a placeholder (e.g., "pad_token").
- (Example) Take the Game Card as an example.
- The original input $y$:
`{"cards": [2, 6], "formula": "2*", "thoughts": "'2*' is an incomplete formula. Since '2*6=12', I should append '6' to the current formula", "action": "6"}`
We first feed it into the SCM and let SCM output the baseline action likelihood $\mathbb{P}(a|y)$. Then, we remove each token by replacing it with a placeholder. For example:
1. Remove 1st token $y^{-1}\cup y^1_{\text{null}}$:
`<pad>"cards": [2, 6], "formula": "2*", "thoughts": "'2*' is an incomplete formula. Since '2*6=12', I should append '6' to the current formula", "action": "6"}`
2. Remove 2nd token $y^{-2}\cup y^2_{\text{null}}$:
`{<pad>cards": [2, 6], "formula": "2*", "thoughts": "'2*' is an incomplete formula. Since '2*6=12', I should append '6' to the current formula", "action": "6"}`
3. Remove 3rd token $y^{-3}\cup y^3_{\text{null}}$:
`{"<pad>": [2, 6], "formula": "2*", "thoughts": "'2*' is an incomplete formula. Since '2*6=12', I should append '6' to the current formula", "action": "6"}`
4. ...
Each modified sequence is then fed to SCM to obtain a new action likelihood $\mathbb{P}(a|y^{-i}\cup y^i_{\text{null}})$ for $i=1,2,...,n$.
- (Function) **The distance function is given by $D=|\mathbb{P}(a|y)-\mathbb{P}(a|y^{-i}\cup y^i_{\text{null}})|$**. This measures the absolute change in action likelihood caused by nullifying a specific token $y^i$. By observing how each token's removal changes (or does not change) the final action likelihood, we can measure each token's causal importance.
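This per-token loop can be sketched as follows (a hedged illustration: `action_likelihood` below is a toy stand-in for the SCM's learned $\mathbb{P}(a|y)$, and the token list is invented rather than the actual Game Card sequence):

```python
def action_likelihood(tokens):
    # Toy stand-in for the SCM's P(a | y): high only while the final
    # action token "6" survives. The real SCM is a learned model.
    return 0.9 if tokens[-1] == "6" else 0.1

def causal_distances(tokens, pad="<pad>"):
    """D_i = |P(a|y) - P(a | y^{-i} with y^i nullified)| for each token i."""
    base = action_likelihood(tokens)
    distances = []
    for i in range(len(tokens)):
        nullified = tokens[:i] + [pad] + tokens[i + 1:]
        distances.append(abs(base - action_likelihood(nullified)))
    return distances

tokens = ["thoughts:", "append", "6", "to", "formula", "action:", "6"]
d = causal_distances(tokens)
assert d[-1] == max(d) and d[-1] > 0  # the action token is most causal
```

In this toy, nullifying any non-action token leaves the likelihood unchanged (distance 0), while nullifying the action token shifts it sharply, which is exactly the signal used to rank token importance.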
> Q2. Why not randomly select a similar token instead? Causing parse_action to fail and return a large negative reward?
Thank you for raising this point. If we understand correctly, your concern is that using a nullified value might cause the parse_action function to fail, leading to invalid actions and large negative rewards during the interaction between the agent and the environment.
We would like to clarify that our method **does not have this issue** because the entire causal computation process takes place after the agent-environment interaction phase. Specifically:
- As shown in Algorithm 1 (Lines 6–11), the rollout phase remains exactly the same as in a standard RL loop. In this step, **the agent interacts normally with the environment** and collects trajectory data (without involving the SCM or any token nullification).
- After that (Lines 12-18 of Algorithm 1), token nullification and the computation of causal weights $B_{y_t\rightarrow a_t}$ are conducted **offline**, using the stored transitions $(s_t, a_t, r_t, y_t)$ in the replay buffer. Since both the action $a_t$ and reward $r_t$ **have already been determined and logged** at that point, the SCM's token nullification analysis does not affect action parsing or the reward signal in any way.
> Q3. GPU memory usage and training time
We have reported the computational budget of causal weights in **Appendix C**. It adds only 0.01 B parameters (<0.2\%), 0.7 GB of GPU memory (<2\%), and 0.5 H100 GPU hours (<4\%), which are modest and introduce small overhead.
Thank you again for your valuable feedback. We will incorporate clarifications on the above three points in our revised version. | Summary: This paper introduces CoSo, a soft reinforcement learning (RL) method for fine-tuning Visual Language Models (VLMs). CoSo incorporates a per-token weighted entropy regularization term, encouraging exploration on impactful tokens. It is built on two key contributions:
- A counterfactual approach, where generated tokens are replaced to assess their impact on the final environment action.
- An adaptation of the soft RL objective, in which each token’s entropy is weighted based on its importance.
Empirical results on standard VLM-based agent benchmarks demonstrate that adding CoSo to AWR or PPO significantly improves performance. Ablation studies further highlight that per-token entropy weighting accelerates exploration by focusing on key tokens.
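For concreteness, the per-token weighted entropy term described in the second bullet might be sketched as follows (an illustrative toy with invented weights and distributions; not CoSo's actual implementation):

```python
import math

def weighted_entropy(token_dists, weights):
    # H_w = sum_i w_i * H(p_i): each token position's policy entropy
    # H(p_i) is scaled by its causal weight w_i, so the exploration
    # bonus concentrates on action-critical tokens.
    total = 0.0
    for p, w in zip(token_dists, weights):
        h = -sum(q * math.log(q) for q in p if q > 0.0)
        total += w * h
    return total

uniform = [0.5, 0.5]   # maximum-entropy token distribution
peaked = [1.0]         # deterministic token, zero entropy
# With all weight on the critical first token, H_w reduces to ln 2:
assert abs(weighted_entropy([uniform, peaked], [1.0, 0.0]) - math.log(2)) < 1e-12
```

Setting all weights to 1 recovers the standard uniform entropy bonus of soft RL, which is the limiting case the ablations compare against.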
## update after rebuttal
Most of my concerns below were related to a lack of explanations, especially regarding the method. The authors' response clarified all of my concerns, as well as my misunderstandings (e.g., concerning additional environment interactions induced by the method).
Additionally, my initial review proposed adding new experiments on LLMs (and not only on VLMs), which I believe could have a high impact. During the rebuttal period, the authors conducted such experiments, highlighting the efficiency and potential of their method on LLMs as well. I am therefore recommending acceptance of this paper.
Claims And Evidence: The authors highlight and tackle a key problem in applying RL to LLMs and VLMs: exploration. While properly exploring remains a key challenge in RL in general, classic methods mostly relying on random action selection fall short when actions are natural language sentences. Indeed, the action space is huge, most token sequences are not meaningful sentences, and all tokens do not have the same importance. Such work is, therefore, timely.
The introduced method makes little changes to the classic soft RL objective. It is, therefore, easily applicable to existing methods while showing significant improvement.
Methods And Evaluation Criteria: As mentioned in my summary, CoSo relies on two main components. The weighted entropy part is well-discussed, and the experiments (especially Section 5.3) show its efficiency. However, the implementation of the "Counterfactual Reasoning" clearly lacks details (which I did not find in the appendices), especially considering how central this part is in CoSo:
- The authors used an additional (smaller-scale) model to predict the likelihood of an environment action based on a token sequence. Almost no motivation is provided for this choice.
- Appendix B.1 indicates that this model is trained offline and online. How are the data collected for the offline training?
- The paper says a CrossEntropy loss is used to train the model online. What is the ground truth distribution used in the loss?
- What is given as input to the model?
- How accurate was it? Could an even smaller model be used?
- What does it mean to "nullify" a token?
- The conclusion mentions up to 300 tokens per action. Is it necessary to "nullify" each token?
Did I understand correctly that, at each step, the method requires as many environment interactions as there are tokens in the chosen action (plus one to actually play the action) in order to evaluate each token's importance? If so, this should be mentioned.
I would also like to mention that CoSo does not really seem specific to VLMs: the exploration challenge when using natural language actions also applies to LLM-based agents, for instance. I think this paper would have an even greater impact if the authors added experiments on benchmarks for LLM agents such as the ones in POAD (Wen et al., 2024) or LOOP (Chen et al., 2025).
Theoretical Claims: The authors provide a theoretical analysis of CoSo, showing its soundness as a soft RL objective.
I only checked the proof in A.1, but the claims appear pretty straightforward.
Experimental Designs Or Analyses: The benchmark used, and the baselines provided seem like natural choices. The analyses are insightful. I particularly enjoyed Section 5.3, which highlights CoSo's efficiency. Section 5.2 could benefit from quantitative results studying the tokens with high weight (e.g., the ones in Figure D.2). The authors also fairly showed the additional computational cost introduced by CoSo in Appendix C.
Supplementary Material: I looked at all appendices but did not perform an extensive review of them.
Relation To Broader Scientific Literature: The "Counterfactual reasoning" component of CoSo proposes a method for measuring the impact of a token in selecting an action in the environment. While not exactly the same, it seems closely related to token-level credit assignment, which has been studied in prior work, particularly in POAD (Wen et al., 2024). This link should be discussed.
Essential References Not Discussed: There are no essential references not discussed.
Other Strengths And Weaknesses: The lack of details on the implementation of the counterfactual reasoning is the main reason for me to recommend only a "Weak accept". I am willing to increase my score if the authors provide details that would be added to the manuscript.
As I also mentioned in my review, extending the paper to LLM agents would have a very impactful effect.
Other Comments Or Suggestions: There is a typo in the title of Appendix C: the "t" at the end of "budget" is missing.
The paper also repeatedly states that prior work "rely on classic trial-and-error exploration" (for instance, at the end of the second paragraph in Section 2). All RL approaches, including CoSo, rely on trial-and-error exploration. What differs in CoSo is that it prioritizes exploration over tokens. I think this could be made clearer.
In the second paragraph of Section 3, the VLM-based policy outputs a distribution over environment actions ($\pi_{\theta}(a|s)$), while in the third paragraph, it outputs a distribution over tokens. As the function mapping a sequence of tokens to an environment action has already been defined in the first paragraph, I think it would be clearer to consider the policy over tokens in the second and third paragraphs.
Finally, the beginning of Section 5 explains that the experiments provide an analysis of CoSo's computational budget, which is currently in appendices and never referenced in the main paper.
Questions For Authors: I do not have any further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your detailed and thoughtful review. We truly appreciate you highlighting the strengths of our method and thinking it timely, well-discussed, easily applicable, and supported by insightful experiments. Regarding your questions, we provide our responses as follows.
> Q1. Why a smaller model?
We chose a smaller model mainly to **reduce computational cost and training time**. As shown in Appendix C and D, we found that the lightweight model already works well as the SCM while introducing little overhead.
> Q2. How is the offline data collected?
The offline data used to train the SCM **comes from the SFT dataset in the original paper** (DigiRL and RL4VLM), which includes the agent's "observations" and its "text responses". For SCM's training, we only use the "text responses" as inputs and their corresponding action labels as ground truth.
> Q3. The ground truth in the CrossEntropy loss.
In our implementation, the SCM is trained as an action classifier. Thus, **the ground truth is the label of the parsed action** corresponding to the agent's text output.
For example, SCM input: "`Action Plan: [PRESS_HOME,DUAL_POINT, DUAL_POINT,DUAL_POINT] ; Action Decision: "action_type": ...`", SCM ground truth: `2` (assuming `PRESS_HOME` maps to index 2)
> Q4. What is the input of SCM?
- As explained above, the input is the **agent's text output $y$** (e.g., `Action Plan: [PRESS_HOME,DUAL_POINT, DUAL_POINT,DUAL_POINT] ; Action Decision: "action_type": ...`), then SCM predicts its correct environment action category.
- Moreover, when computing causal weights, we also nullify a specific token (e.g., the first token ($y^{-1}\cup y^1_{\text{null}}$)) and **feed the modified sentence** "`<pad> Plan: [PRESS_HOME,DUAL_POINT, DUAL_POINT,DUAL_POINT] ; Action Decision: "action_type": ...`" into the SCM to get token's causal influence.
> Q5. How accurate was SCM? Could a smaller model be used?
- During training, the SCM achieves ~100% accuracy in AitW, ~90–100% in Gym Cards, and ~70–80% in ALFWorld.
- Yes, a smaller model can be used, especially in high-accuracy tasks. However, there is a trade-off between model size and the quality of causal weights.
> Q6. What does "nullify" a token mean?
To "nullify" a token means **replacing it with a placeholder token (e.g., `pad_token` or `unk_token`)** to simulate its absence.
> Q7. Is it necessary to "nullify" each token? Does it require extra environment interactions?
Yes, we **need to nullify each token** to compute its causal importance. However, this process requires **no extra environment interactions**.
- As shown in Alg. 1 (Lines 6–11), the 'Rollout phase' remains exactly the same as in a standard RL loop. In this step, the agent interacts normally with the environment and collects trajectory data **(without the SCM or token nullification)**.
- After that, we evaluate the token's importance using the stored transitions $(s_t, a_t, r_t, y_t)$ in the replay buffer $\mathcal{U}$. Specifically:
1. Feed the raw $y_t$ into the SCM and let SCM output the baseline action likelihood $\mathbb{P}(a_t|y_t)$.
2. Nullify a specific token and feed $y_t^{-i}\cup y^i_{\text{null}}$ into the SCM to obtain the action likelihood $\mathbb{P}(a_t|y_t^{-i}\cup y^i_{\text{null}})$.
3. Compute token's importance via $\mathcal{B}^i_{y_t\rightarrow a_t}=|\mathbb{P}(a_t|y_t)-\mathbb{P}(a_t|y_t^{-i}\cup y^i_{\text{null}})|$.
- The above process **relies only on the replay buffer $\mathcal{U}$ and the SCM model (without environment interactions)**. Thus, environment interaction cost stays the same.
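As a minimal sketch of the above procedure (our illustration, not the actual implementation; `scm_prob` and `toy_scm` are hypothetical stand-ins for the SCM classifier's action-likelihood output):

```python
# Sketch of the causal-weight computation (Steps 1-3 above).
# `scm_prob(tokens, action)` stands in for the SCM's P(a | y).
def causal_weights(tokens, action, scm_prob, null_token="<pad>"):
    base = scm_prob(tokens, action)  # baseline P(a_t | y_t)
    weights = []
    for i in range(len(tokens)):
        nulled = tokens[:i] + [null_token] + tokens[i + 1:]
        # |P(a_t | y_t) - P(a_t | y_t^{-i} U y^i_null)|
        weights.append(abs(base - scm_prob(nulled, action)))
    return weights

# Toy SCM: the action is likely only if its literal token survives.
def toy_scm(tokens, action):
    return 0.9 if action in tokens else 0.1

w = causal_weights(["Plan:", "PRESS_HOME", ";"], "PRESS_HOME", toy_scm)
# Nullifying the action token changes the likelihood; filler tokens do not.
```

Note the loop only re-queries the SCM, never the environment, which is why no extra environment interactions are needed.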
> Q8. Extension to LLM agents
Thank you for your valuable suggestion. We totally agree that CoSo is not limited to VLM agents and it applies naturally to LLM agents operating in purely textual environments.
To show this, we include an experiment on `AlfredTWEnv`, a purely text-based benchmark in ALFWorld where several LLM agents (e.g., ReAct, Reflexion) have been evaluated. We implement RL4VLM and CoSo on `Qwen2.5-7B-Instruct` and train the LLM agents from scratch (without SFT) over 12,000 environment steps. Here's the result:
|AlfredTWEnv|Pick|Look|Clean|Heat|Cool|Pick2|Avg.|
|-|-|-|-|-|-|-|-|
|RL4VLM (LLM Agent)|62.9|**38.5**|35.0|33.3|25.9|**11.1**|32.8|
|CoSo (LLM Agent)|**77.1**|24.2|**40.7**|**37.5**|**35.3**|7.0|**39.6**|
> Reference
Thank you for the suggestion. We'll cite and discuss token-level credit assignment works like POAD in the revised Related Work section.
> Writing and presentation
Thank you for your helpful editorial suggestions. We'll fix the typos in the title of Appendix C, clarify the use of "trial-and-error", unify the policy definition over tokens in the second and third paragraphs in Section 3, and refine the beginning of Section 5 by adding a pointer to Appendix C.
Thank you again for all your constructive feedback. We will include these clarifications, especially the counterfactual reasoning, and updates in our revision.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response.
They clarified all my concerns as well as my misunderstandings (e.g., concerning additional environment interactions induced by the method). I also deeply appreciated the additional experiments using LLMs, which I believe can have a high impact.
Consequently, I will raise my score.
I also have a question regarding these additional experiments: If I recall well, RL4VLM first applies SFT to the VLM before using RL. You said you trained the LLMs from scratch, is it both for RL4VLM and CoSo?
---
Reply to Comment 1.1.1:
Comment: Thank you very much for raising your score! We are glad to answer your remaining question and would appreciate any further response.
> Is it both for RL4VLM and CoSo?
Yes — in the pure-text `AlfredTWEnv` shown in the rebuttal, we trained **both** RL4VLM and CoSo from scratch. This was mainly due to the following reasons:
1. The SFT dataset used in RL4VLM for ALFWorld contains **both image and text modalities**, making it unsuitable for directly fine-tuning LLM agents. Simply dropping the image input would result in incomplete environment information for the agent.
2. Collecting a new pure-text SFT dataset specifically for the LLM agent would have been quite **time-consuming**, especially within the short rebuttal period.
3. The pure-text environment is **much simpler** compared to the multimodal one, and the LLM agent (`Qwen2.5-7B-Instruct`) **already has a good initialization**, so training from scratch worked well in this case.
We hope this clarifies the experimental setup. | Summary: This paper investigates the fine-tuning of VLM agents through a two-stage offline-to-online process, with a particular focus on the online phase, termed CoSo. CoSo uses soft Q-learning to improve exploration within sequential reasoning frameworks, such as chain-of-thought (CoT). The entropy term is computed by summing the entropy values across output distributions. Unlike prior methods, CoSo incorporates causal-aware entropy values, derived using causal weights from a structured causal model (SCM). Specifically, the SCM assesses action distribution change for counterfactual token sequences, which are generated by nullifying a single token. A significant change in action likelihood suggests that the nullified token is essential for decision-making. As a result, CoSo assigns weights to each entropy value proportional to the likelihood change caused by nullifying the corresponding token. Experimental results show that CoSo consistently outperforms baselines across various VLM agent benchmarks, confirming the algorithm's effectiveness.
Claims And Evidence: The use of soft RL to enhance exploration is well-motivated and convincing, supported by its proven theoretical and empirical effectiveness in traditional RL. In contrast, although the causal weighting of entropy terms is intuitive and demonstrates empirical effectiveness, it lacks a clear theoretical foundation or logical justification to establish a strong correlation between the causal weights and the significance of the tokens. While the concept of importance weighting is compelling in principle, the proposed methodology, which employs causal weights with the given formulation, would benefit from further verification.
Methods And Evaluation Criteria: CoSo is applied on top of existing RL-based online VLM fine-tuning algorithms, implying high expandability and flexibility. The benchmarks, which consist of AitW, Gym Cards, and ALFWorld, are well-regarded in evaluating VLM agents and span various domains of VLM decision-making.
Theoretical Claims: I reviewed all aspects, including policy evaluation, policy improvement, and policy iteration for the framework used in CoSo. While there are no critical flaws, I noticed a few minor issues, such as unexplained shorthand notations (ex) A.6, A.11) and some typos (ex) A.7).
Experimental Designs Or Analyses: I reviewed most aspects of the experimental design and analysis, including the experimental setups and ablations. Since CoSo is implemented on top of the baseline, using the same hyperparameters as the baseline for the corresponding components appears to be a fair approach. Additionally, conducting quantitative analyses, such as token generation visualization and causal weight heatmaps, significantly enhances the interpretability of the results. However, the ablation studies could be further refined. A hyperparameter sensitivity analysis, particularly with respect to the learning rate in the SCM or the entropy coefficient, could provide additional insights into the robustness of the algorithm. Furthermore, exploring alternative weighting strategies, such as attention weights or approaches similar to the Shapley value, could offer valuable comparisons. Please refer to the questions for more details.
Supplementary Material: I noticed that the source code is provided in the supplementary material and I briefly reviewed it without running the code.
Relation To Broader Scientific Literature: Since VLM agents can be applied to various real-world scenarios such as web search and task automation, the ability of CoSo to enhance the performance of VLM agents without requiring a larger model or extensive training resources is noteworthy. This suggests that CoSo might enable the use of smaller VLM models while maintaining satisfactory performance, potentially improving accessibility and efficiency.
Essential References Not Discussed: One of the key components of CoSo is the structured causal model (SCM), but there is no citation related to this. Also, such SCM is BERT-based, thus BERT paper should be referenced.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: 1. For eq. 6, it would be preferable to use a single vertical line to denote absolute value, rather than a double vertical line.
Questions For Authors: 1. For causal weights, we can use a likelihood difference of all actions, for instance, divergences, instead of considering only a likelihood difference from the chosen action. Can you provide the results?
2. It seems that there is no ablation study for the learning rate in the SCM or the entropy coefficient. Can you provide the results, particularly the latter?
3. As observed in Appendix D, the model seems to treat intermediate reasoning components inconsistently, sometimes overlooking them and at other times considering them important. What are underlying tendencies that might explain this phenomenon?
4. Is there any rationale for using causal weights to weight entropy values, other than the ones provided in the paper?
5. Instead of calculating causal weights, another simpler approach might be to use attention weights. Can you provide the results using them?
6. The methodology used for calculating causal weights is based on the leave-one-out (LOO) scheme, which is often compared to the Shapley value. Can you provide the results using the Shapley value or its variants, such as SHAP?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your in-depth and constructive feedback. We are also grateful for kindly pointing out that our method is well-motivated and convincing, compelling in principle, and implies high expandability and flexibility. Regarding your concerns, we provide our responses below:
> Q1: The results of using the likelihood difference of all actions.
We tried both **KL divergence and L2 distance** on Gym Cards to take all actions into account. Both produced **similar results** in terms of causal weights and overall performance. KL divergence is asymmetric and more sensitive to small distributional differences, so it may require more careful smoothing or regularization.
||KL Divergence|L2 Distance|Ours|
|-|:-:|:-:|:-:|
|NL|98.5|100.0|100.0|
|BJ|40.3|41.8|41.5|
> Q2: Ablation study for the learning rate and entropy coefficient.
We provide ablations about the learning rate and the entropy coefficient on the Gym Cards NL task below. We found that CoSo **needs an appropriate learning rate and entropy coeff to work best**. For example, if entropy coeff is too large, it excessively perturbs the output distribution, reducing the agent's stability; if too small, it approximates a setting without exploration incentives.
|Learning Rate|Result|\||Entropy Coeff|Result|
|:-:|:-:|:-:|:-:|:-:|
|1e-3|96.3|\||10.0|78.8|
|1e-4|100.0|\||1.0|100.0|
|1e-5|100.0|\||0.1|98.3|
|1e-6|94.3|\||0.01|95.5|
|1e-7|90.8|\||0.001|88.5|
> Q3: Causal differences across intermediate reasoning components.
Actions like `DUAL_POINT` and `TYPE` typically involve **more complex token patterns** than simpler actions such as `PRESS_HOME` or `PRESS_ENTER`. Beyond the main action token, they often include **additional elements (e.g., coordinate values or type-specific content) that vary across instances**. These extra tokens can introduce nuanced information, making the SCM more **sensitive** to intermediate components.
> Q4: Rationale for using causal weights to weigh entropy values?
**Yes, the theory of weighted entropy has been formally studied in [1]**: $h_\varphi(p)=-\sum_i\varphi(x_i)p(x_i)\log p(x_i)$ (see Eq.(1.2) in [1]). Here $\varphi(x_i)$ is a non-negative weight function, presenting a value/utility of the outcome $x_i$. This formulation makes entropy context-dependent, assigning different weights based on the value of each outcome. In our case, we adopt causal weights as the utility function, assigning higher weights to more important tokens within an action.
[1] Kelbert, Mark, Izabella Stuhl, and Yuri Suhov. "Weighted entropy: basic inequalities." Modern Stochastics: Theory and Applications 4.3 (2017): 233-252.
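As an illustration (ours, not from [1]), the weighted entropy $h_\varphi(p)$ can be computed directly from its definition:

```python
import math

# Weighted entropy h_phi(p) = -sum_i phi(x_i) * p(x_i) * log p(x_i),
# per Eq. (1.2) of [1]; phi assigns a utility/weight to each outcome.
def weighted_entropy(p, phi):
    return -sum(w * q * math.log(q) for q, w in zip(p, phi) if q > 0)

p = [0.5, 0.5]
h_uniform = weighted_entropy(p, [1.0, 1.0])   # recovers Shannon entropy ln 2
h_weighted = weighted_entropy(p, [2.0, 1.0])  # up-weights one outcome's term
```

With uniform weights this reduces to the ordinary Shannon entropy; in our setting $\varphi$ plays the role of the causal weight of each token.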
> Q5 & Q6: Attention weights and Shapley value.
Thank you for your valuable suggestions. We evaluated the attention weight and the Shapley value and presented the results below.
- (Attention weights) We found that attention weights **perform suboptimally**. Since attention weights dynamically evolve during training, especially in its early stage, they often lack stability and fail to consistently reflect token importance.
- (Shapley value) Computing the exact Shapley value scales exponentially with token count ($2^n$ coalitions), which is computationally infeasible. Thus, we use Monte Carlo sampling with 10 subsets. It produces **similar results as ours**. This similarity may be attributed to the fact that Eq. (6) in our paper can be interpreted as a subset-based approximation of the Shapley value. Nevertheless, the Shapley value is significantly more **computationally expensive**.
||Attention|Shapley value|Ours|
|-|:-:|:-:|:-:|
|NL|91.3|100.0|100.0|
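A Monte Carlo Shapley estimate along these lines can be sketched as follows (our illustration, not the actual implementation; `scm_prob` and `toy_scm` are hypothetical stand-ins for the SCM's action likelihood):

```python
import random

# Estimate each token's Shapley value by averaging its marginal
# contribution over random reveal orders, instead of all 2^n coalitions.
def mc_shapley(tokens, action, scm_prob, n_samples=10, null_token="<pad>"):
    n = len(tokens)
    contrib = [0.0] * n
    for _ in range(n_samples):
        order = random.sample(range(n), n)  # a random permutation
        kept = [null_token] * n             # start fully nullified
        prev = scm_prob(kept, action)
        for i in order:
            kept[i] = tokens[i]             # reveal token i
            cur = scm_prob(kept, action)
            contrib[i] += cur - prev        # its marginal contribution
            prev = cur
    return [c / n_samples for c in contrib]

# Toy SCM: the action is likely only if its literal token is present.
def toy_scm(tokens, action):
    return 0.9 if action in tokens else 0.1

phi = mc_shapley(["Plan:", "PRESS_HOME"], "PRESS_HOME", toy_scm)
```

Each Monte Carlo sample costs $n$ SCM queries, versus the single pass per token of the leave-one-out scheme, which is where the extra computational expense comes from.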
> Reference
Thank you for pointing them out. We will include citations for SCM and the BERT paper in the updated version.
> Notations, typos and suggestions
We appreciate the detailed feedback. We will improve the clear definitions of all notations in (A.6) and (A.11), fix the typos in (A.7), and update Eq.(6) to use a single vertical line for denoting absolute value, as suggested.
Thank you again for your valuable and encouraging feedback. We will incorporate all these clarifications and improvements into our revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have updated my recommendation accordingly.
---
Reply to Comment 1.1.1:
Comment: We appreciate your comments and positive response. Thank you for all the time and efforts in reviewing this paper. | Summary: This paper proposes CoSo, a reinforcement learning approach for finetuning the VLM agent. The theoretical analysis shows that CoSo can guarantee the property of convergence and performance. Experimental results demonstrate that CoSo achieves superior performance on a range of control tasks.
Claims And Evidence: The authors define the metric (Eq. (4) and (6)) to measure the effect of the token $y^i$ for a given action $a$ which is the likelihood difference between $y$ and $y^{-i} \cup y^{i}_{null}$. Later on, the casual-weighted entropy is defined in Eq. (7), the weighted term aims to measure one token $y^i$ and all tokens $y$ instead of the corresponding previous tokens $y^{1:i-1}$. To the reviewer, the weighted term $\mathcal{B}^i\_{y\rightarrow a}$ does not align with the entropy term $\mathcal{H}(y^i|y^{1:i-1})$.
Methods And Evaluation Criteria: To evaluate the efficiency of the proposed approaches, this paper adopts AitW, Gym Cards and ALFWorld which are commonly-used and challenging benchmarks.
Theoretical Claims: I have checked the correctness of Eq. (2), Lemma 4.2, Lemma 4.3 and Proposition 4.4.
Experimental Designs Or Analyses: I have checked the soundness and validity of the experimental designs and analyses.
Supplementary Material: I have reviewed the supplementary material (source code).
Relation To Broader Scientific Literature: This paper provides a novel RL method to enhance the learning efficiency of VLM agents in addressing challenging control tasks (e.g., AitW [1]).
CoSo is an interesting RL method based on counterfactual reasoning, and it has the potential to be applied to address other challenging control tasks.
[1] Rawles, Christopher, et al. "Androidinthewild: A large-scale dataset for android device control." Advances in Neural Information Processing Systems 36 (2023): 59708-59728.
Essential References Not Discussed: Some related papers can be considered for discussion, but it is not compulsory.
[1] Bai, Hao, et al. "Digi-Q: Learning Q-Value Functions for Training Device-Control Agents." arXiv preprint arXiv:2502.15760 (2025).
[2] Wang, Taiyi, et al. "Distrl: An asynchronous distributed reinforcement learning framework for on-device control agents." arXiv preprint arXiv:2410.14803 (2024).
[3] Wu, Qingyuan, et al. "VSC-RL: Advancing Autonomous Vision-Language Agents with Variational Subgoal-Conditioned Reinforcement Learning." arXiv preprint arXiv:2502.07949 (2025).
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: None.
Questions For Authors: To the reviewer, the weighted term $\mathcal{B}^i\_{y\rightarrow a}$ does not align with the entropy term $\mathcal{H}(y^i|y^{1:i-1})$.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable comments and the time you took to review our work. We also appreciate your positive remarks about the potential applicability of our method to other challenging control tasks. Regarding your questions, we provide our responses below:
> Q1. The weighted term and the entropy term
- Thanks for the question. $\mathcal{B}^{i}_{y \rightarrow a}$ denotes the causal weight of token $y^i$, while $\mathcal{H}(y^i | y^{1:i-1})$ denotes the conditional entropy of the same token. They are therefore **aligned** in the objective.
- Moreover, the causal weight $\mathcal{B}_{y\rightarrow a}^i$ is **used only as a scalar factor** (without gradient) for the entropy term $\mathcal{H}(y^i | y^{1:i-1})$. As such, the exact computation of the causal weight is flexible, and in practice, it can be implemented in different ways—globally, locally, or even based on random subsets of $y^{1:n}$ like Shapley Value (as mentioned by Reviewer igVU).
> Reference
Thanks for the helpful suggestions. We will add the recommended references and include a discussion of them in the updated Related Work section.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response, and my concerns have been addressed.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's comments and positive response. We thank the reviewer for all the time and efforts in reviewing this paper. | null | null | null | null | null | null |
Probabilistic Group Mask Guided Discrete Optimization for Incremental Learning | Accept (poster) | Summary: This paper concerns the parameter-isolation methods in incremental learning. However, existing approaches often disregard parameter dependencies, resulting in an over-reliance on newly allocated parameters. To address this issue, this paper proposes Probabilistic Group Mask selection (PGM), a group-wise approach that captures parameter dependencies by exploring candidate masks within each group. Specifically, PGM partitions parameters into groups with multiple candidate masks, assigning probabilities to these masks and leveraging Gumbel-Softmax for differentiable sampling, enabling efficient optimization of the discrete mask selection process. The theoretical analysis demonstrates that incorporating parameter dependencies enhances sub-network selection. Experiments conducted on standard benchmarks confirm its superior effectiveness compared to existing IL approaches.
## update after rebuttal
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes. It makes sense.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes. The soundness and validity of experimental designs or analyses are make sense.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: This paper is related to class-incremental learning, which is widely researched.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
1. Innovative Approach: This paper proposes Probabilistic Group Mask selection (PGM), a group-wise approach that captures parameter dependencies by exploring candidate masks within each group. The group-wise probabilistic masking strategy effectively captures parameter dependencies, a novel contribution to parameter-isolation methods.
2. Theoretical Grounding: The theoretical analysis demonstrates that incorporating parameter dependencies enhances sub-network selection. The formal analysis of parameter reuse and error reduction via dependency modeling strengthens the method’s motivation.
3. Empirical Results: PGM outperforms baselines like WSN across metrics, particularly in reducing parameter capacity while maintaining accuracy. Meanwhile,the further analysis reveal the effectiveness of each module.
4. Good Analysis: The ablation studies, computational efficiency tests, and visualizations (e.g., dependency patterns, mask distributions) provide valuable insights.
Weaknesses:
1. Limited Task Diversity: Experiments focus on image classification; testing on non-vision tasks (e.g., NLP) would strengthen generalizability claims.
2. Group Size Sensitivity: While larger groups improve performance, the plateau effect at higher K is not thoroughly analyzed (e.g., computational trade-offs).
Other Comments Or Suggestions: 1. The writing is clear, but the appendix could better explain implementation details (e.g., hyperparameters for Gumbel-Softmax).
Questions For Authors: 1. Could the method be extended to class-incremental learning (CIL) without task IDs?
2. What is the overhead of maintaining group-wise masks for large-scale models (e.g., Transformers)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
**Generalization to Non-Vision tasks:** To ensure fairness and comparability, we adopt the same task setup as in [1,2], which are widely used as baselines for evaluating incremental learning performance. To further assess the generalization capability of our method beyond the vision domain, we extend it to an audio classification task using the KineticsSounds dataset [3]. The dataset is divided into five incremental tasks, denoted as KS-5. As shown in Table 1, PGM outperforms WSN in both accuracy and parameter capacity when using the ResNet18 architecture.
**Table 1.** Performance evaluation on the KS-5 dataset.
|Architecture|Method|Acc↑|CAP↓|
|-|-|-|-|
|ResNet18|WSN|69.43|76.44|
||PGM|**70.44**|**57.41**|
**Analysis of the Plateau Effect at Larger Group Sizes:** While the theoretical search space for mask combinations grows exponentially with group size $K$, our Gumbel-Softmax sampling reduces the practical complexity to linear time (i.e., $\mathcal{O}(K)$). As shown in Figure 4a of the original paper, while computational overhead increases only marginally with larger $K$, the average accuracy begins to plateau beyond a certain threshold. More detailed hyperparameter settings will be given in the final version.
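As a minimal sketch (ours, not the paper's code), one group's mask can be drawn with Gumbel-Softmax in $\mathcal{O}(K)$ time; `logits` stands in for the group's learnable mask scores:

```python
import math, random

def gumbel_softmax_sample(logits, tau=1.0):
    # Perturb each logit with Gumbel(0, 1) noise, then apply a
    # temperature-scaled softmax; argmax of the result is the hard mask.
    gumbels = [-math.log(-math.log(random.random())) for _ in logits]
    y = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(y)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in y]
    z = sum(exps)
    return [e / z for e in exps]

random.seed(0)
probs = gumbel_softmax_sample([2.0, 0.5, -1.0], tau=0.5)  # K = 3 candidate masks
chosen = max(range(len(probs)), key=probs.__getitem__)    # selected candidate
```

In training, the soft sample (or its straight-through hard version) keeps the selection differentiable so gradients flow to the mask logits.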
**Extended to Class Incremental Learning**: Extending task-incremental learning to class-incremental learning requires accurate task identity recognition. Prior works [4,5] have shown that accurate task-ID prediction in the CIL setting relies on strong out-of-distribution (OOD) detection capabilities. To this end, we integrated the Energy-based OOD detection method [6] into our framework. As shown in Table 2, PGM achieves higher accuracy than WSN on both the Last and AIA metrics, suggesting that the subnetworks selected by PGM demonstrate stronger OOD detection capability.
**Table 2.** CIL performance comparison on CIFAR100-10 using the DeiT architecture. AIA is the average incremental ACC. Last is the ACC after learning the final task.
|Architecture|Method|Last↑|AIA↑|
|-|-|-|-|
|DeiT|WSN+Energy|62.21|75.19|
||PGM+Energy|**64.43**|**77.06**|
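For reference, the energy score from [6] used for task-ID prediction is a negative log-sum-exp of the logits; a minimal sketch (ours, not the original implementation):

```python
import math

# Energy score E(x) = -log sum_j exp(f_j(x)); lower energy indicates
# a more in-distribution (i.e., correct-task) input.
def energy_score(logits):
    m = max(logits)  # subtract max for numerical stability
    return -(m + math.log(sum(math.exp(l - m) for l in logits)))

e_conf = energy_score([10.0, 0.0, 0.0])  # peaked logits -> lower energy
e_flat = energy_score([1.0, 1.0, 1.0])   # flat logits -> higher energy
```

A subnetwork whose logits are sharply peaked on in-task inputs and flat on out-of-task inputs therefore yields a cleaner energy gap, which is what "stronger OOD detection capability" refers to above.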
**Training Time on Large Scale Models**: To ensure fairness and comparability, we adopt the same model architectures as in [1,2], which are widely recognized as baselines for incremental learning. To further assess the training time of PGM across different architectures, we follow the settings in [5,7] and evaluate it on both ResNet18 and DeiT, comparing against parameter-isolation methods that require no additional storage and exhibit no forgetting. As shown in Table 3, the joint evaluation and optimization of grouped parameters introduce noticeable computational overhead as model size increases. While this results in longer training time, it helps reduce the risk of suboptimal parameter selection by explicitly modeling parameter dependency.
**Table 3.** Performance evaluation across different model architectures.
|Architecture|Method|CIFAR100-10|||
|-|-|-|-|-|
|||Acc↑|CAP↓|Time(h)↓|
|ResNet18|WSN|73.51|90.66|0.32|
||PGM|**75.37**|**71.59**|0.43|
|DeiT|WSN|93.78|69.46|0.55|
||PGM|**94.21**|**60.65**|0.76|
**References**:
[1]. Haeyong Kang, et al. Forget-free continual learning with winning subnetworks. *ICML*, 2022.\
[2]. Yusong Hu, et al. Task-aware Orthogonal Sparse Network for Exploring Shared Knowledge in Continual Learning. *ICML*, 2024.\
[3]. Relja Arandjelovic, et al. Look, listen and learn. *ICCV*, 2017.\
[4]. Gyuhak Kim, et al. A Theoretical Study on Solving Continual Learning. *NeurIPS*, 2022.\
[5]. Haowei Lin, et al. Class incremental learning via likelihood ratio based task prediction. *ICLR*, 2024.\
[6]. Weitang Liu, et al. Energy-based out-of-distribution detection. *NeurIPS*, 2020.\
[7]. Md. Sazzad Hossain, et al. Rethinking Task-Incremental Learning Baselines. *ICPR*, 2022.
Claims And Evidence: Yes. The main claims in this paper are supported from methodology, theoretical analysis and experiments.
Methods And Evaluation Criteria: Yes. The PGM is well-designed and effectively addresses the identified challenges in incremental learning.
Theoretical Claims: Yes. The proposed theory, i.e., Error Reduction via Dependency, is rigorously proven to be correct and strongly supports the viewpoints of the paper.
Experimental Designs Or Analyses: Main experimental results demonstrate the superiority of the PGM. And the authors provide the ablation study, Computational Efficiency, Parameter Dependency, Different Layer Parameter Distribution to support the claims of the paper.
Supplementary Material: Yes. All the supplementary materials have been reviewed. The authors provide the details of the theory and experimental implementations in the supplementary materials.
Relation To Broader Scientific Literature: The key contributions of the paper are related to the incremental learning.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths
1. Practical Impact: The efficient parameter reuse mechanism in PGM significantly reduces memory and computational overhead, making it particularly well-suited for deployment in resource-constrained environments.
2. Differentiable Optimization: The Gumbel-Softmax reparameterization elegantly handles discrete mask selection, enabling end-to-end training. The learning algorithm is well-designed and novel.
3. Task-Informed Initialization: Leveraging prior task masks for initialization is an innovative strategy that enhances transferability by effectively retaining relevant knowledge from previous tasks.
4. The paper rigorously validates its claims from multiple perspectives, including methodological soundness, theoretical justification, comprehensive ablation studies, efficiency analysis, and insightful visualizations. This multifaceted evaluation ensures a thorough understanding of the proposed approach, reinforcing its reliability and effectiveness.
Weaknesses
1. Long-Task Scalability: The experiments are limited to sequences of 5–10 tasks, which may not fully capture the scalability of the approach. Extending the evaluation to longer task sequences would provide deeper insights into its long-term performance, stability, and adaptability in more complex scenarios.
2. How does PGM ensure stability when dependencies conflict across tasks (e.g., Task 1’s critical parameters are Task 2’s noise)?
3. For the TinyImageNet results, why is the ACC improvement over SPG smaller compared to CIFAR-100?
Other Comments Or Suggestions: 1. The dependency analysis (Fig. 5b) is insightful but could be expanded—e.g., how do dependencies vary across layers/tasks?
2. The impact of group size K on training time (Fig. 6a) is underdiscussed. A complexity analysis would help.
Questions For Authors: Could the method integrate rehearsal or generative replay to further reduce forgetting?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
**Extending to Longer Task Sequences:** To ensure fairness and comparability, we adopt the same task configurations as in [1,2] (i.e., 10, 20, and 40 tasks), which are widely recognized as baselines for evaluating incremental learning performance. To further assess performance under longer task sequences, we extend the evaluation on CIFAR-100 by increasing the number of tasks to 10, 20, and 50 under the DeiT architecture [4]. As shown in Table 1, PGM consistently achieves strong performance across all settings and demonstrates better scalability than WSN, particularly with respect to CAP.
**Table 1.** Performance evaluation on long task sequences using DeiT architecture.
|Architecture|Method| CIFAR100-10||CIFAR100-20||CIFAR100-50||
|-|-|-|-|-|-|-|-|
|||Acc↑|CAP↓|Acc↑|CAP↓|Acc↑|CAP↓|
|DeiT|WSN|93.78|69.46|97.68|72.84|98.40|76.25|
||PGM|**94.21**|**60.65**|**98.22**|**65.33**|**98.91**|**70.64**|
**Stability under Conflicting Dependency across Tasks and Layers:** Parameter dependencies are modeled at the intra-layer level, reflecting the fact that the importance of a parameter can be influenced by other parameters within the same task. To facilitate knowledge transfer while reducing task interference, we employ a similarity-based mask initialization strategy that modulates the influence of prior tasks according to their similarity to the current task. Greater task discrepancy results in lower similarity scores, which suppress the transfer of conflicting knowledge. While the current framework focuses on modeling intra-layer dependencies, incorporating cross-layer dependencies may further improve performance and generalization. This remains a promising direction for future work, and we plan to explore it in subsequent research.
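A minimal sketch of such similarity-weighted initialization, assuming prior masks are combined in proportion to normalized task-similarity scores. The function name and the exact weighting scheme are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def init_mask_logits(prior_masks, similarities, base=0.0):
    """Initialize current-task mask logits from prior-task masks.

    prior_masks:  (num_prior_tasks, num_params) binary masks in {0, 1}
    similarities: (num_prior_tasks,) similarity of each prior task to
                  the current task, e.g. cosine similarity in [0, 1]
    """
    w = similarities / (similarities.sum() + 1e-8)
    # Dissimilar tasks get near-zero weight, suppressing the transfer of
    # conflicting knowledge; similar tasks bias logits toward reusing
    # their previously selected parameters.
    return base + (w[:, None] * prior_masks).sum(axis=0)
```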
**Different Performance Gains on TinyImageNet Compared to CIFAR-100:** The difference in ACC improvement can be attributed to variations in dataset characteristics and model configurations. TinyImageNet uses images with a resolution of 64×64, includes 40 tasks, and is trained with a network composed of four convolutional layers and three fully connected layers. In contrast, CIFAR-100 uses images with a resolution of 32×32 and includes 10 and 20 tasks, with AlexNet used for the 10-task setting and LeNet for the 20-task setting. These differences result in performance variation across the two benchmarks.
**Impact of Group Size K on Complexity and Training Time:** While the theoretical search space for mask combinations grows exponentially with group size $K$, our Gumbel-Softmax sampling reduces the practical complexity to linear time (i.e., $\mathcal{O}(K)$). This is achieved by directly learning parameter activation distributions, thereby avoiding exhaustive combinatorial search. As shown in Figure 4a of the original paper, performance gains tend to plateau beyond K = 8, suggesting that further increasing the group size yields limited benefit while introducing additional computational overhead.
**Extending to Class-Incremental Learning with Rehearsal Sample:** The proposed method is developed under the task-incremental learning setting, where parameter-isolation methods prevent forgetting via task-specific subnetworks. However, as noted in [3], forgetting may re-emerge in the more challenging class-incremental learning (CIL) setting due to the absence of task identity during inference. To address this, [3] introduces OOD detection for implicit task inference, and [4] further enhances it with rehearsal samples. Following the protocol in [4], we replace their mask selection strategy with ours, and compare against parameter-isolation baselines without additional memory. As shown in Table 2, our method achieves competitive performance in the CIL setting.
**Table 2.** CIL performance comparison using DeiT architecture. AIA is the average incremental ACC. Last is the ACC after learning the final task.
|Architecture|Method|CIFAR100-10||
|-|-|-|-|
|||Last↑|AIA↑|
|DeiT|WSN+TPL|67.89|80.93|
||PGM+TPL|**69.55**|**81.78**|
**References**:
[1]. Haeyong Kang, et al. Forget-free continual learning with winning subnetworks. *ICML*, 2022.\
[2]. Yusong Hu, et al. Task-aware Orthogonal Sparse Network for Exploring Shared Knowledge in Continual Learning. *ICML*, 2024.\
[3]. Gyuhak Kim, et al. A Theoretical Study on Solving Continual Learning. *NeurIPS*, 2022.\
[4]. Haowei Lin, et al. Class incremental learning via likelihood ratio based task prediction. *ICLR*, 2024.
Claims And Evidence: Yes. Claim “incorporating parameter dependencies” is supported by Theorem 3.2 and empirical results. Claim “Group-wise mask selection” is supported by derivation of differentiable sampling and ablation study.
Methods And Evaluation Criteria: Yes. The proposed PGM effectively addresses the challenges, i.e., catastrophic forgetting by incorporating parameter dependencies, in incremental learning.
Theoretical Claims: Yes. The theory proposed in the paper is sound. Theorem 3.2 establishes a mathematical basis, demonstrating how group-wise selection reduces error rates through variance reduction. Although based on a simplified assumption (Gaussian errors), this analysis is consistent with empirical findings (Fig. 2c) and highlights the group size K as an adjustable parameter balancing accuracy and complexity.
Experimental Designs Or Analyses: The authors present extensive experiments, including a main comparison with recent state-of-the-art incremental-learning baselines, an ablation study on the effectiveness of key components, the influence of group size, and adaptability to diverse training paradigms.
Supplementary Material: Yes. I have checked the supplementary materials, including method details, implementation details, and other results. Relevant materials support the core claims of this paper.
Relation To Broader Scientific Literature: Yes. This paper focuses on core issue, i.e., catastrophic forgetting by incorporating parameter dependencies, in incremental learning.
Essential References Not Discussed: The references have been thoroughly discussed. All key references have been incorporated, particularly those concerning the most recent advancements in incremental learning.
Other Strengths And Weaknesses: Strengths:
1. Novel integration of probabilistic group masking and parameter dependencies. PGM introduces a group-wise strategy that explicitly models parameter interactions, a significant departure from existing methods that treat parameters independently.
2. Strong empirical results across metrics and datasets. The method achieves state-of-the-art results on Split CIFAR-100, CIFAR-100 Superclass, and Split TinyImageNet, outperforming parameter-isolation baselines like WSN and regularization methods like GPM. Furthermore, the authors analyze the computational cost, parameter dependency, task differentiability.
3. Effective use of Gumbel-Softmax for differentiable optimization. The reparameterization of discrete mask selection via Gumbel-Softmax (Eq. 4) enables end-to-end training while preserving task-specific adaptability.
4. Comprehensive ablation study validating design choices. The paper rigorously validates individual contributions of the Group and Mask Initialization modules (Table 2), showing that their combination drives performance gains.
Weaknesses:
1. Limited scalability analysis for large models: Experiments focus on smaller architectures (e.g., AlexNet, LeNet), leaving scalability to modern networks (e.g., ResNet, ViTs) unaddressed.
2. Superficial treatment of dependency patterns: Although Figure 6 visualizes localized dependencies in convolutional layers, the work does not quantify their impact (e.g., correlation strength) or explore adaptive grouping strategies.
Other Comments Or Suggestions: See weaknesses.
Questions For Authors: How does PGM handle scenarios where parameter dependencies are non-local (e.g., in transformers)?
Why is CAP defined with Huffman encoding (Sec. 4.1), and how does this affect interpretation?
Furthermore, please refer to weakness for more questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
**Scalability to Modern Architectures:** To ensure the fairness and comparability of results, we adopt the same model architectures as in [1,2], which are widely recognized as baselines for evaluating incremental learning performance. To further assess the generalization capability of PGM across different architectures, we evaluate it on both DeiT and ResNet18 following the settings in [3,4], and compare it with parameter-isolation methods that incur no additional storage cost and exhibit no forgetting. As shown in Table 1, PGM consistently demonstrates robust performance across all evaluated configurations. Notably, the greater parameter reduction observed on ResNet architectures can be attributed to the larger number of convolutional layers, where modeling parameter dependency tends to be more effective. In contrast, Transformer-based architectures contain more linear layers, where parameter dependencies are inherently weaker, leading to comparatively smaller gains.
**Table 1.** Performance evaluation across different model architectures.
|Architecture|Method|CIFAR100-10||
|-|-|-|-|
|||Acc↑|CAP↓|
|ResNet18|WSN|73.51|90.66|
||PGM|**75.37**|**71.59**|
|DeiT|WSN|93.78|69.46|
||PGM|**94.21**|**60.65**|
**On Correlation Quantification and Adaptive Grouping Strategies:** Figure 6 of the original paper shows that parameter interactions are predominantly local, with each parameter mainly influenced by its immediate neighbors, while the impact of distant parameters is considerably weaker. This observation supports the assumption that grouped parameter blocks can be treated as approximately independent, suggesting that a simplified grouping strategy is sufficient to capture essential dependencies without significant loss in modeling capacity. Incorporating correlation-aware analysis and adaptive grouping mechanisms remains a promising direction for future research.
**Huffman Encoding in CAP Calculation:** To ensure fair comparison with prior work [1], we adopt Huffman encoding for consistency. The binary nature of the mask (i.e., 0/1 values) aligns well with Huffman's optimal prefix coding scheme, allowing for efficient compression and significant reduction in mask storage.
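As a rough sketch of how a 0/1 mask can be Huffman-compressed, the snippet below builds a prefix code over fixed-length chunks of mask bits (with only two raw symbols, chunking is what lets frequent patterns in a sparse mask get short codewords). The chunk size and this encoding scheme are illustrative assumptions and may differ from the encoding used in [1].

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a symbol sequence."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries carry an integer tie-breaker so dicts are never compared.
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def compress_mask(mask_bits, chunk=8):
    """Encode a 0/1 mask by Huffman-coding fixed-length chunks of bits."""
    chunks = ["".join(map(str, mask_bits[i:i + chunk]))
              for i in range(0, len(mask_bits), chunk)]
    code = huffman_code(chunks)
    return "".join(code[c] for c in chunks), code
```

Because subnetwork masks are sparse, a few chunk patterns dominate, so the encoded bitstring is much shorter than storing one bit per parameter.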
**References**:
[1]. Haeyong Kang, et al. Forget-free continual learning with winning subnetworks. *ICML*, 2022.\
[2]. Yusong Hu, et al. Task-aware Orthogonal Sparse Network for Exploring Shared Knowledge in Continual Learning. *ICML*, 2024.\
[3]. Haowei Lin, et al. Class incremental learning via likelihood ratio based task prediction. *ICLR*, 2024.\
[4]. Md. Sazzad Hossain, et al. Rethinking Task-Incremental Learning Baselines. *ICPR*, 2022.
Claims And Evidence: The claims made in the submission are well-supported by both theoretical analysis and empirical results.
Methods And Evaluation Criteria: The paper presents a novel theoretical framework for understanding parameter selection in incremental learning. The authors introduce the concept of parameter reuse with dependency (Definition 3.1) and demonstrate through Theorem 3.2 that group-wise selection reduces evaluation errors by capturing local parameter interactions. This theoretical foundation justifies the group-wise approach and provides insight into why considering parameter dependencies leads to better sub-network selection.
Theoretical Claims: Yes
Experimental Designs Or Analyses: The experimental evaluation is comprehensive and rigorous. The authors test PGM on three standard benchmark datasets: Split CIFAR-100, CIFAR-100 Superclass, and Split TinyImageNet. They compare PGM against multiple state-of-the-art methods, including parameter isolation approaches (PackNet, SupSup, WSN), parameter regularization techniques (La-MAML, GPM, FS-DGPM), and other relevant baselines. The evaluation metrics include ACC (average classification performance), CAP (parameter capacity usage), and BWT (backward transfer). The results consistently show that PGM outperforms existing methods across all metrics.
Supplementary Material: The supplementary material includes detailed derivations of the theoretical claims, additional experimental results, and implementation details. This additional information strengthens the paper by providing complete technical details and supporting the main findings with more extensive data.
Relation To Broader Scientific Literature: The paper situates itself within the broader incremental learning literature, building upon and advancing parameter isolation methods. It acknowledges previous work on mask-based approaches while highlighting the novel contribution of incorporating parameter dependencies through group-wise selection. The approach aligns with recent trends in efficient machine learning that seek to optimize model capacity while maintaining performance.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Advantages:
1. Theoretical Contributions: The paper provides a novel theoretical framework for understanding parameter selection in incremental learning, which is a significant contribution to the field.
2. Practical Efficiency: The method balances computational efficiency with performance, making it feasible for real-world applications.
3. Scalability: The group-wise approach allows the method to scale effectively with increasing numbers of tasks and parameters.
Disadvantages:
1. Implementation Complexity: The probabilistic sampling and Gumbel-Softmax reparameterization may increase implementation complexity compared to simpler mask selection methods.
2. Hyperparameter Sensitivity: Performance may be sensitive to the choice of group size (K) and other hyperparameters, requiring careful tuning for optimal results.
3. Limited Generalization Analysis: While the method performs well on the tested datasets, the paper could benefit from more extensive analysis of its generalization capabilities across different types of tasks and model architectures.
Other Comments Or Suggestions: No
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
**Training Overhead and Parameter Sensitivity:** While probabilistic sampling and Gumbel-Softmax reparameterization may increase implementation complexity, this design enables differentiable and learnable mask selection, which is essential for effective parameter grouping. Larger group sizes $K$ can incur longer training time, and we observe that the performance improvement tends to plateau as $K$ increases (see Figure 4a of the original paper). Regarding hyperparameter sensitivity, the optimal value of $K$ may vary across datasets due to differences in task difficulty and parameter dependency. More detailed hyperparameter settings will be given in the final version.
**Generalization across Model Architectures:** To ensure the fairness and comparability of results, we adopt the same model architectures as in [1,2], which are widely recognized as baselines for evaluating incremental learning performance. To further assess the generalization capability of PGM across different architectures, we evaluate it on both DeiT and ResNet18 following the settings in [3,4], and compare it with parameter-isolation methods that incur no additional storage cost and exhibit no forgetting. As shown in Table 1, PGM consistently demonstrates robust performance across all evaluated configurations. Notably, the greater parameter reduction observed on ResNet architectures can be attributed to the larger number of convolutional layers, where modeling parameter dependency tends to be more effective. In contrast, Transformer-based architectures contain more linear layers, where parameter dependencies are inherently weaker, leading to comparatively smaller gains.
**Table 1.** Comparative performance evaluation across different model architectures.
|Architecture|Method|CIFAR100-10||
|-|-|-|-|
|||Acc↑|CAP↓|
|ResNet18|WSN|73.51|90.66|
||PGM|**75.37**|**71.59**|
|DeiT|WSN|93.78|69.46|
||PGM|**94.21**|**60.65**|
**Generalization across Different Task Types:** To further evaluate the generalization capability of our method beyond the vision domain, we extend it to an audio classification task using the KineticsSounds dataset [5]. The dataset is partitioned into five incremental tasks, referred to as KS-5. As shown in Table 2, PGM outperforms WSN in both accuracy and parameter capacity when using the ResNet18 architecture.
**Table 2.** Comparative performance evaluation on the KS dataset.
|Architecture|Method|KS-5||
|-|-|-|-|
|||Acc↑|CAP↓|
|ResNet18|WSN|69.43|76.44|
||PGM|**70.44**|**57.41**|
**References**:
[1]. Haeyong Kang, et al. Forget-free continual learning with winning subnetworks. *ICML*, 2022.\
[2]. Yusong Hu, et al. Task-aware Orthogonal Sparse Network for Exploring Shared Knowledge in Continual Learning. *ICML*, 2024.\
[3]. Haowei Lin, et al. Class incremental learning via likelihood ratio based task prediction. *ICLR*, 2024.\
[4]. Md. Sazzad Hossain, et al. Rethinking Task-Incremental Learning Baselines. *ICPR*, 2022.\
[5]. Relja Arandjelovic, et al. Look, listen and learn. *ICCV*, 2017.
Claims And Evidence: The authors' claims are consistent with their technique and motivation. The main purpose is to explore the application of parameter isolation, and more precisely of pruning, in continual learning.
Methods And Evaluation Criteria: Yes, the proposed methods and techniques are all related to conditional masks, including mask selection, optimization and initialization. However, I don’t see the specific application of dependency in the whole method. The more obvious point is that the final differentiable mask is generated by weighting different masks and task-informed mask initialization. But I’m not sure if this is the dependency discussed by the author, and I hope the author can explain this part.
Theoretical Claims: Since it is an urgent review, I took a quick look at it. If there are any questions later, I will add them.
Experimental Designs Or Analyses: The choice of experimental datasets is in line with the standards of the field. However, the accuracy improvements do not seem very significant; since the gains on the sparsity metric are significant, such experimental results are acceptable. My question is that generating a group of different masks seems not uncommon in continual learning, so I am a little skeptical about the novelty of doing so.
Supplementary Material: None.
Relation To Broader Scientific Literature: It is helpful for parameter isolation methods in continual learning.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Ablation experiments are quite sufficient.
Other Comments Or Suggestions: As mentioned above, the issues of novelty and dependency need to be explained by the author.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments.
**Clarifying the Role of Dependency in Our Method:** Modeling parameter dependency during subnetwork selection is the key contribution of this work, enabling more effective parameter allocation in incremental learning. Specifically, this involves two key aspects: (1) **Dependency** refers to the notion that the importance of a parameter should be evaluated in the context of its interactions with other parameters [1], in order to better reflect the collective contribution of parameter subsets. (2) To implement this idea, we partition parameters into groups and learn a categorical distribution over all possible selection combinations within each group, enabling joint evaluation of parameter subsets and thus capturing intra-group dependency. \
In addition to dependency modeling, we introduce **task-informed mask initialization** to initialize task-specific mask distributions by leveraging similarities with prior task masks, thereby promoting efficient parameter reuse while preserving task-specific adaptability.
**Clarifying the Novelty of Dependency-Aware Subnetwork Selection:** Parameter-isolation based incremental learning methods aim to assign compact yet effective subnetworks to individual tasks, thereby reducing capacity overhead and mitigating forgetting. However, existing approaches typically assume parameter independence, selecting parameters based on weight magnitude [2] or learnable importance scores [3], without considering how the importance of one parameter may depend on others. This assumption can lead to inaccurate parameter importance estimation, ultimately resulting in suboptimal subnetworks. To address this limitation, we propose a dependency-aware approach, Probabilistic Group Masking (PGM), which explicitly models parameter dependency during subnetwork selection. Specifically, (1) At the methodological level, PGM partitions parameters into multiple groups and evaluates all possible combinations within each group to capture intra-group dependency. Based on this modeling, PGM performs probabilistic sampling within each group to generate task-specific masks, with the entire process made differentiable via Gumbel-Softmax reparameterization. This design enables more dependency-aware subnetwork construction by jointly evaluating parameter combinations, thereby facilitating the selection of higher-quality subnetworks and enhancing the reuse of previously activated parameters. (2) At the theoretical level, we provide a theoretical analysis showing that modeling parameter dependency significantly reduces the risk of selecting suboptimal parameters. (3) At the empirical level, we conduct extensive experiments, including comparisons with state-of-the-art baselines, ablation studies of key components, and analysis of group size. The results consistently demonstrate that our dependency-aware approach reduces overall parameter usage while maintaining or even improving task performance.
**References**:
[1] Denis Kuznedelev, et al. CAP: Correlation-Aware Pruning for Highly-Accurate Sparse Vision Models. *NeurIPS*, 2023.
[2] Arun Mallya, et al. PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning. *CVPR*, 2020.
[3] Haeyong Kang, et al. Forget-free Continual Learning with Winning Subnetworks. *ICML*, 2022. | null | null | null | null |
Benign Overfitting in Token Selection of Attention Mechanism | Accept (poster) | Summary: This paper explores benign overfitting in the token selection process of attention mechanisms, analyzing how transformers generalize despite fitting noisy training labels. It adopts feature learning framework to explain when models ignore noise (high SNR) or fit it while still generalizing well. Through theoretical analysis and experiments on both synthetic and real-world datasets, the paper demonstrates how training dynamics influence token selection.
Claims And Evidence: This paper is well written, with rigorous theoretical analysis and clear experimental evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria makes sense for the problem.
Theoretical Claims: The theoretical analysis is rigorous. By adopting feature learning framework, the authors analyzed the training dynamic systems and give the description of growth rate of the attention. I believe the rate is analyzed from some implicit equations but I did not check the details.
Experimental Designs Or Analyses: The experiment results further supported the theory.
Supplementary Material: I did not check all the proofs, but looked into several parts. The proof is well written, and the thoughts can be easily followed.
Relation To Broader Scientific Literature: I appreciate the authors' efforts to compare their results with existing works and highlight their contributions.
Essential References Not Discussed: _
Other Strengths And Weaknesses: Several comments:
[1] It would be beneficial for the authors to conduct experiments with varying levels of noise in real-world datasets to further validate their findings. Additionally, a heatmap visualization on real data would provide valuable insights into the model’s behavior under different noise conditions.
[2] While the authors aim to introduce a brief proof outline in Section 4.1, the presentation feels more like a summary of multiple results. A more detailed explanation would enhance clarity. I recommend adding a section in the Appendix to provide an overview of the proof, including key derivations. For instance, is it possible to give a short discussion on how the function $g(x) = 2x + 2 \sinh(x - \log T)$ is developed?
Other Comments Or Suggestions: See above.
Questions For Authors: I notice that in this paper, the authors assume that the number of iterations $T$ can grow exponentially. I am curious about the potential impact of this assumption on the conclusions. Since $\log T$ can be polynomial, are the results dependent on the exponential property in the loss function? Additionally, do the proofs establish that certain desirable properties hold for all t, allowing $\log T$ to grow polynomially instead?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We appreciate your positive evaluation of our work.
---
> 1: It would be beneficial for the authors to conduct experiments with varying levels of noise in real-world datasets to further validate their findings. Additionally, a heatmap visualization on real data would provide valuable insights into the model’s behavior under different noise conditions.
In response to your comment, **we are making the following additions to Section G.2**:
1. A table similar to Table 2, presenting results for the label noise levels $\eta = 0$ and $0.2$ in addition to the $\eta = 0.1$ results in the main text
2. A heatmap illustrating how accuracies change as the label noise $\eta$ varies continuously within the range $[0, 0.2]$ for each dataset.
While these results do not directly serve as an experimental validation of the SNR boundary in the theorem, we believe that they provide additional insights into the behavior of real-world datasets.
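As a side note for implementers, label noise at level $\eta$ of the kind discussed here is typically injected by flipping each training label independently with probability $\eta$. A minimal sketch (the $\pm 1$ label encoding and the function name are our assumptions, not details from the paper):

```python
import numpy as np

def inject_label_noise(labels, eta, seed=0):
    """Flip each +-1 label independently with probability eta."""
    rng = np.random.default_rng(seed)
    flip = rng.random(len(labels)) < eta
    return np.where(flip, -labels, labels)

clean = np.ones(1000, dtype=int)           # all labels +1 for illustration
noisy = inject_label_noise(clean, eta=0.1)
print((noisy != clean).mean())             # empirical flip rate, close to 0.1
```

Sweeping $\eta$ over $[0, 0.2]$ with such a generator is all that is needed to reproduce the kind of heatmap experiment proposed above.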
---
> 2: While the authors aim to introduce a brief proof outline in Section 4.1, the presentation feels more like a summary of multiple results. A more detailed explanation would enhance clarity. I recommend adding a section in the Appendix to provide an overview of the proof, including key derivations. For instance, is it possible to give a short discussion on how the function $g(x) = 2x + 2\sinh(x - \log T)$ is derived?
To address your concern and enhance the motivation, we have made the following improvements in Section 4.1 and Appendix:
- To prevent the abrupt start of Section 4.1, **we have added the following overview after left column, line 308**:
"In the analysis of benign overfitting, it is necessary to track the model behavior on the training data while also its generalization ability. We first present the result of training dynamics, and the generalization result is shown at the end of this section in Lemma 4.8."
- **We have clarified the reason for introducing Definition 4.4** by adding the following sentence after left column, line 310:
"This quantity is useful for evaluating the softmax probability at each time step $\tau$ for a given training example."
- **We have added the motivation of the function $g(x)$ at line 329** as follows:
"This function naturally arises when expressing the evolution of the attention gap using the weight updates in Equations 4 and 5 and evaluating the dynamics via the quadrature method. Please refer to the Appendix for further details on the derivation."
- **We have added a brief explanation of how the dynamics of the attention gap are derived after Lemma 4.5**:
"This lemma is shown by tracking the gradient descent dynamics and conducting induction argument with several desirable properties."
- **We have added a new section titled "Proof Sketch" before Section C** to provide an overview of the proof structure and the motivation behind our analytical approach.
The motivation for using the function $g(x) = 2x + 2\sinh(x - \log T)$, where $T$ is the sequence length of the inputs, naturally arises in the analysis of token selection dynamics. Specifically, the parameter updates are governed by the softmax probability term $s(\tau)(1 - s(\tau))$, which can be expressed in terms of the "attention gap" in Definition 4.4 and the equation $(2 + 2\cosh(x - \log T))^{-1}$ by Lemma E.1 and E.2. The function $g(x)$ appears naturally when applying the quadrature method to analyze the training dynamics (please refer to the discussion around Equation 96).
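The two identities invoked here can be checked numerically. In the simplest configuration (one token with attention logit $x$ against $T$ competitors at logit $0$, our illustrative reading of Lemmas E.1 and E.2), the softmax probability satisfies $s(1-s) = (2 + 2\cosh(x - \log T))^{-1}$, and $g$ is an antiderivative of its reciprocal, $g'(x) = 2 + 2\cosh(x - \log T)$, which is what makes the quadrature step work:

```python
import numpy as np

T = 32                       # number of competing tokens (arbitrary choice)
x = 1.7                      # attention-gap value (arbitrary test point)

# Softmax probability of the selected token against T competitors at logit 0.
s = np.exp(x) / (np.exp(x) + T)
lhs = s * (1 - s)
rhs = 1 / (2 + 2 * np.cosh(x - np.log(T)))
print(abs(lhs - rhs))        # ~0 up to floating-point error

# g is an antiderivative of 1/(s(1-s)): g'(x) = 2 + 2*cosh(x - log T).
g = lambda z: 2 * z + 2 * np.sinh(z - np.log(T))
h = 1e-6
num_deriv = (g(x + h) - g(x - h)) / (2 * h)
print(abs(num_deriv - (2 + 2 * np.cosh(x - np.log(T)))))   # ~0
```

So integrating the attention-gap ODE via separation of variables produces $g$ directly, which matches the motivation sketched above.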
---
> 3: I notice that in this paper, the authors assume that the number of iterations $T$ can grow exponentially. I am curious about the potential impact of this assumption on the conclusions. Since $\log T$ can be polynomial, are the results dependent on the exponential property in the loss function? Additionally, do the proofs establish that certain desirable properties hold for all t, allowing $\log T$ to grow polynomially instead?
Thank you for your question. We answer the questions as follows.
1. As you correctly pointed out, the main theorem states that the number of time steps required for benign overfitting is of exponential order. Please note that in our paper, $T$ denotes the sequence length rather than the number of time steps.
2. This exponential number of time steps arises not from the shape of the loss function but from the softmax function in the attention mechanism. We provide this intuition around line 376 in the main text.
3. As you noted, Lemmas D.11, D.12, and D.13 in the Appendix use an inductive argument to establish that the desired properties hold for all time steps. In particular, Proposition $C(\tau)$ implies that the evolution of the attention gap, as defined in Definition 4.4, follows a logarithmic order. Consequently, the number of time steps required for generalization scales exponentially. | Summary: This paper presents a theoretical analysis of benign overfitting in token selection within the attention mechanism, focusing on the training dynamics and the generalization performance. The authors show that, under conditions based on the signal-to-noise ratio, the "benign overfitting" phenomenon occurs in the attention mechanism.
Claims And Evidence: Both the theoretical analysis and experiments clearly support the claims.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The proofs seem correct.
Experimental Designs Or Analyses: In the real-world experiments, the comparison among "not overfitting", "benign overfitting", and "harmful overfitting" is not shown. More explanation and experiments on harmful overfitting may be needed.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper addresses the attention mechanism from the perspective of benign overfitting, which will guide the use of attention-based models.
Essential References Not Discussed: No more.
Other Strengths And Weaknesses: Strength:
1. The paper understands the token selection ability of attention mechanism from the "benign overfitting" aspect.
2. The mathematical analysis and proofs are solid and clear.
3. The paper proposes a novel method of using an auxiliary function $g(x)$ to describe the training dynamics of the attention gap. This method successfully deals with the difficulty caused by the softmax probabilities.
Weakness:
1. The model may be a bit simple, involving only one attention layer without any additional nonlinearity. What if there is FFN and nonlinearity on top of the attention layer?
2. Fixing the value of $v$ and training only $W$ and $p$ makes the problem less challenging. If $v$ is also included in the update procedure, how will it influence the attention mechanism?
3. The assumptions in the paper seem very strong (A1-A8). It is not clear why we need these assumptions. It is recommended to contain some explanations about these assumptions.
4. The data distribution appears implausible due to exactly three distinct groups and a consistent scale $\rho$ that separates the relevant token from the weakly relevant token.
Other Comments Or Suggestions: No more.
Questions For Authors: Why does the author consider the function $g(x)$ (line 328) to analyze the attention gap? What's the motivation of utilizing this kind of function?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We hope that our answers address all of your concerns and that this reply will lead you to reconsider your decision.
---
> 1: The comparison among not, benign, and harmful overfitting is lacking in the real-world experiments. More explanation and experiments on harmful overfitting are needed.
The primary objective of the real-world experiments is to vary the training size $n$ in order to support the transition between not overfitting and benign overfitting in Theorem 4.1. This is because we cannot control the values of $\\|\mu\\|_2$ and $d$, unlike in the synthetic experiments.
However, Table 2 also provides insights into harmful overfitting. For instance, on the AG-news dataset, while the model perfectly fits the noisy samples, increasing the number of noisy samples leads to a decline in test accuracy, which is a harmful overfitting scenario.
In response to the review, **we are adding the following experiments to Section G.2**: softmax dynamics in real-world settings similar to Figure 3, and new tables similar to Table 2 for different label noise levels $\eta$, to improve the generality of the experimental results.
---
> 2, 3: What if there is FFN and nonlinearity on top of the attention layer? Once $\nu$ is also trained, what will it influence the attention mechanism?
If the FFN and nonlinearity are fixed, the problem remains a token selection problem given the token scores, and similar results hold. As explained below, **fixing the head $\nu$ is a necessary problem setting rather than an assumption or a limitation of the analysis**.
Since our focus is analyzing benign overfitting in **the token selection mechanism**, jointly optimizing the head would unnecessarily expand the components enabling noise memorization and benign overfitting. Specifically, when we show benign overfitting, it is unclear whether this is because of the token selection inside softmax or simply a consequence of the linear model $\nu$, which has already been extensively studied (e.g., [Bartlett+, 2020]).
Assumption 3.3 is necessary to formulate the problem of token selection given the assigned token scores. To clarify this point, **we have moved this part from Assumption to the Problem Setting**.
Furthermore, please note that this setting is practically plausible in the area of parameter-efficient fine-tuning, such as prompt tuning and LoRA.
Our study focuses on aligning with a more realistic data model instead of joint head optimization. We newly analyze the token selection in the presence of label noise (see Table 1), which highly complicates softmax dynamics by introducing different training directions within the same training run. Since the parameter updates depend on the softmax probabilities (Equations 8 and 11), addressing this difficulty requires carefully tracking and evaluating softmax dynamics, and a significant portion of our proof is dedicated to resolving this new challenge.
---
> 4: The assumptions (A1-A8) seem very strong.
In Section 3.5, before (A1-A8), we explicitly list and compare the assumptions used in prior works on benign overfitting. This comparison shows that our assumptions align with commonly used ones in the literature.
While the assumptions may seem complicated, the essential part is the relationships among $d$, $\\|\mu\\|_2$, and $n$. It is common to express parameter assumptions using big-O notation ignoring logarithmic dependencies [Cao+, 2022; Jiang+, 2024], but we present them explicitly.
Could you please share which specific assumption you find particularly strong compared to the existing studies? Based on your feedback, we can further elaborate on why it is necessary for our analysis.
---
> 5: Three distinct token groups and a consistent scale $\rho$ are implausible.
Building on previous research, we have carefully designed the analysis setting to better align with real-world scenarios.
The existing benign overfitting works commonly assume that the input image is split into two parts: signal and noise [Cao+, 2022; Kou+, 2023]. Similarly, prior studies on attention dynamics often assume that each input consists of a single optimal token together with irrelevant noise vectors that are orthogonal to the signal [Tarzanagh+, 2023b; Jiang+, 2024], as summarized in Table 1.
Our study introduces several realistic elements to the analysis, including middle states termed "weakly relevant tokens" and non-orthogonality between signal and noise.
To show the correspondence with real-world datasets and strengthen the plausibility of the data model, we provided an example from medical imaging in the paragraph starting at line 4105.
Finally, our analysis also holds if the scale $\rho$ varies across examples. **We have added this clarification after Definition 3.1.**
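To help readers picture this data model, here is one plausible instantiation as a sketch; the token layout, names, and scales below are our illustration rather than the paper's exact Definition 3.1. Each input holds one relevant token aligned with the signal $y\mu$, one weakly relevant token $\rho y \mu$, and $T-2$ Gaussian noise tokens:

```python
import numpy as np

def sample_input(mu, y, rho, T, rng):
    """One input: [relevant, weakly relevant, T-2 Gaussian noise tokens]."""
    d = len(mu)
    relevant = y * mu                        # signal token
    weak = rho * y * mu                      # weakly relevant token, scale rho
    noise = rng.standard_normal((T - 2, d))  # irrelevant noise tokens
    return np.vstack([relevant, weak, noise])   # shape (T, d)

rng = np.random.default_rng(0)
mu = np.ones(64) / 8.0                       # ||mu||_2 = 1 in this sketch
X = sample_input(mu, y=+1, rho=0.5, T=16, rng=rng)
print(X.shape)                               # (16, 64)
```

Allowing $\rho$ to vary per example, as the clarification above states, amounts to drawing `rho` from a distribution instead of fixing it.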
---
> 6: What is the motivation for using $g(x)$?
Due to the character limit of OpenReview, please refer to the second comment in Reviewer BNsU at the bottom. **We have made several updates to clarify the motivation throughout Section 4.1, not just for $g(x)$.**
---
Rebuttal Comment 1.1:
Comment: I appreciate your detailed response, which has addressed some of my concerns. However, I am still concerned about whether the findings can be extended to more complex models, such as multi-layer settings or incorporating additional nonlinear components (without fixing any parameters). Could the authors clarify whether their results hold in such settings, or provide experimental validation using these more realistic frameworks?
---
Update (09 Apr 2025)
Thank you for your detailed response and additional experiments in a more practical setting. Most of my concerns have been addressed, and I will increase my score to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for your further question. We are delighted that our reply has resolved some of your concerns. We hope that the following reply is fully satisfactory and will be reflected in your evaluation.
---
> I am still concerned about whether the findings can be extended to more complex models, such as multi-layer settings or incorporating additional nonlinear components (without fixing any parameters). Could the authors clarify whether their results hold in such settings, or provide experimental validation using these more realistic frameworks?
Based on your suggestion, **we have conducted additional experiments under a more practical model setting**.
Specifically, we considered a one-layer Transformer encoder that includes non-linear feedforward layers and layer normalization.
We conducted an experiment similar to Figure 3 to investigate the dynamics of token selection, and we observed the following:
1. Similar **benign overfitting was observed** depending on the relationship between the dimension $d$ and the signal strength $\\|\mu\\|_2$. In low-SNR settings, harmful overfitting was also observed (Figure 3(a)). However, under the same setting as in Figure 3(c), the model exhibited benign overfitting instead of not overfitting. This aligns with the intuition that increasing model capacity facilitates fitting to the training data.
2. In the benign overfitting case, **token selection for noisy samples progresses more rapidly**. The following table shows the dynamics of the softmax probability assigned to token $x_2$, which aligns most closely with the label noise. The top row corresponds to the same run as Figure 3(b).
| Model \ Time step (iteration) | 50 | 100 | 200 | 300 | 400 | 500 | ~ | 1000 |
| - | - | - | -| - | - | - | - | - |
| Model in our analysis | 0.16 | 0.21 | 0.43 | 0.67 | 0.82 | 0.88 | | 0.96 |
| One-layer encoder | **0.44** | **0.74** | **0.86** | **0.89** | 0.89 | 0.91 | | 0.93 |
This result is consistent with the understanding from our analysis.
Our theoretical analysis proves benign overfitting under a stricter setting, where only the attention mechanism can be trained. In contrast, the additional experiment here allows the training of feedforward layers and classifier, which makes noise memorization possible in broader components. The ability to learn token scores dynamically enables a cooperative interaction between the token selection mechanism and the feedforward layers, which could lead to faster token selection as shown in the table.
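The qualitative behavior in the table, softmax mass gradually concentrating on one token under gradient descent, can be reproduced with a toy experiment. The model, loss, and hyperparameters below are our simplification, not the setup from the paper: attention logits `z` are trained so that the attended score matches the (noisy) label, and the probability on the aligned token climbs toward 1.

```python
import numpy as np

T, lr, steps = 8, 0.5, 500
scores = np.zeros(T); scores[1] = 1.0   # token x_2 carries the label-aligned score
y = 1.0
z = np.zeros(T)                         # learnable attention logits

history = []
for t in range(steps):
    s = np.exp(z) / np.exp(z).sum()     # softmax attention
    out = s @ scores                    # attended score, here equals s[1]
    # squared loss (y - out)^2; gradient through the softmax Jacobian
    grad = -2 * (y - out) * (scores - out) * s
    z -= lr * grad
    history.append(s[1])

print(round(history[49], 2), round(history[-1], 2))  # probability on token x_2 grows
```

The slowing growth as $s$ approaches 1 reflects the vanishing $s(1-s)$ factor that the paper's attention-gap analysis tracks.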
Furthermore, **we have already conducted a heatmap-based experiment similar to Figure 4, and we have added this result to the appendix**.
Regarding the training loss, we observed very small values within the range shown in the plot. For the test loss, **we observed a similar boundary structure**, and in the generalizing region, the loss was smaller than that in Figure 4. We suppose that this result is due to two main factors: i) the ability to learn output scaling and ii) the ability to distribute the effect of noise memorization not only within the attention mechanism but also to the feedforward layer.
While the experiments in the main paper, both synthetic and real-world, are designed to support our main theorem, the additional experiments here aim to provide further insights into the behavioral differences arising from joint optimization. We believe these new experiments not only address your concern but also strengthen the overall contribution of our paper.
Benign overfitting provides a theoretical explanation for the empirical observation that over-parameterized models can generalize well despite memorizing the training data. However, due to the difficulty of analyzing training set fitting and generalization without relying on uniform convergence, the existing theoretical studies have been limited to simplified model settings. These include linear models [Bartlett+, 2020; Chatterji & Long, 2021] and two-layer NNs or CNNs with fixed second-layer weights [Frei+, 2022; Xu & Gu, 2023; Meng+, 2024; Cao+, 2022; Xu+, 2024]. Bridging the gap between such theoretical settings and practically used models remains an important direction for future research on benign overfitting.
---
**(Edit: 8 Apr, AoE)**
Thank you very much for your detailed and constructive feedback. We have carefully addressed each of your comments, and we hope that our responses sufficiently resolve your concerns. We would be truly grateful if you could kindly reconsider the evaluation score. We sincerely appreciate the time and effort you have invested in reviewing our work. | Summary: The paper studies benign overfitting in token selection within the attention mechanism, using a data model consisting of signal and noise and a one-layer attention model. This work characterizes the conditions under which benign overfitting, harmful overfitting, or no overfitting occurs.
## Update After Rebuttal
Most of my concerns were addressed during the rebuttal, and I increased my score to 2 (weak reject). However, I believe that additional reviews are necessary to determine whether the revised manuscript fully addresses the concerns raised by all reviewers. For this reason, I still lean toward rejection.
Claims And Evidence: The main theorem statement is clearly presented. While I did not have time to read the full proof in the appendix, I find that the proof techniques described in the main text lack motivation. That is, it is difficult for the reader to understand the intuition behind the chosen proof techniques.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I did not check the full proof in the appendix and only read the proof techniques section in the main text. Due to its less-motivated presentation, I cannot confidently assess the correctness of the theorems.
Experimental Designs Or Analyses: I find the experimental results, especially the real-setting experiments, somewhat trivial and expected—showing that more data leads to better loss. I highly encourage the authors to include additional experimental results that better highlight the intuition behind their findings.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The paper studies benign overfitting in a more general data distribution and a more complex neural network architecture compared to previous works, such as Cao et al. (2022) and Oymak et al. (2023). This contribution broadens the understanding of benign overfitting in more general settings.
Essential References Not Discussed: I believe the paper includes most of the relevant works on benign overfitting and the theoretical aspects of attention and transformers. However, I found an incorrect reference. In line 107 (left), Yun et al. (2020) is cited as a related work on the training dynamics of transformers. However, this paper does not consider training dynamics; instead, it shows that transformers are universal approximators. I recommend the authors carefully review their citations, as there may be other misreferenced works that I did not notice.
Other Strengths And Weaknesses: The problem setting is novel and more general than in previous works. Additionally, the comparison to prior studies (Table 1) is helpful. However, there are several weaknesses that should be addressed:
- Some assumptions are strong and not well-motivated. For example, Assumption 3.3, (A1)–(A8), and the first two lines in Theorem 4.1 seem overly restrictive, and their significance is unclear.
- The proof techniques lack motivation. Section 4.1 is difficult to follow because it primarily consists of lemmas without sufficient explanation or intuition in the current draft.
Other Comments Or Suggestions: * The running head should be fixed.
* I suggest that the authors provide the gradient update equations for $W$ and $p$ for the convenience of readers. Since these equations are not included in the current draft, it is difficult to follow the technical details. Also, I encourage the authors to clarify why the technical assumptions are required in their analysis.
Questions For Authors: - In Theorem 4.1, what happens in the “Not Overfitting” case after the time step $\tau$?
- In Theorem 4.1, I think the time step $\tau$ should depend on the uncertainty $\delta$, but there seems to be no such dependency.
- Why does $g(x) = 2x + 2\sinh(x - \log T)$ appear in the proof techniques?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We hope that our answers address all of your concerns and that this reply will lead you to reconsider your decision.
---
> 1: The proof techniques lack motivation. Section 4.1 lacks sufficient explanation or intuition in the current draft.
**We have added the following improvements to enhance motivation**. Due to the character limit of OpenReview, please refer to the second comment from Reviewer BNsU at the bottom for details on the updates.
- The overview of Section 4.1 after left column, line 308
- The reason for introducing Definition 4.4 after left column, line 310
- The motivation of the function $g(x)$ at line 329 (see also Comment 9 below)
- The brief explanation of how the dynamics of the attention gap are derived after Lemma 4.5
- The new section titled "Proof Sketch" before Section C
---
> 2: The real-setting experiments are somewhat trivial and expected—showing that more data leads to better loss. The authors should include additional experimental results that better highlight the intuition behind their findings.
Increasing the number of samples containing label noise does not trivially lead to a better loss. Depending on the dataset, fitting to label noise can result in worse generalization; for example, the AG-news dataset in Table 2 exhibits a harmful overfitting scenario.
To further improve the generality of experimental results, **we are adding the following new experiments to Section G.2:** softmax dynamics in real-world settings similar to Figure 3 and new tables similar to Table 2 for different label noise $\eta$.
---
> 3: Yun et al. (2020) is an incorrect reference.
You are correct, and we have already fixed it. Additionally, we have reviewed the references again to ensure there are no errors.
---
> 4: Assumption 3.3, (A1)–(A8), and the first two lines in Theorem 4.1 seem overly restrictive, and their significance is unclear.
**Assumption 3.3**
Our study is the analysis of benign overfitting in the **token selection mechanism**; thus, training the head $\nu$ is inappropriate as it obscures which part enables noise memorization and benign overfitting. Specifically, when we show benign overfitting, it is unclear whether this is because of the token selection inside softmax or simply a consequence of the linear model $\nu$, which has already been extensively studied (e.g., [Bartlett+, 2020]).
Assumption 3.3 is necessary to formulate the problem of token selection given the assigned token scores.
To clarify this point, **we have moved this part from Assumption to the Problem Setting**.
Finally, **we have added a new proposition after Appendix E** showing that a one-step optimization from a randomly initialized $\nu$ satisfies Assumption 3.3. This result supports the validity of the setup to some extent.
**(A1-A8)**
In Section 3.5, we explicitly list and compare the assumptions used in prior works on benign overfitting. This comparison shows that our assumptions align with commonly used ones in the literature.
Several papers express parameter assumptions using big-O notation ignoring logarithmic dependencies [Cao+, 2022; Jiang+, 2024], but we present them explicitly.
Could you please share which specific assumption you find strong compared to the prior studies? Based on your feedback, we can further elaborate on why it is necessary for our analysis.
**Assumption in Theorem 4.1**
The assumption $\\|\nu\\|_2 = O(1 / \\|\mu\\|_2)$ is made to ensure that the token scores remain at most of constant order. This corresponds to appropriately scaling down the network output, and satisfying the assumption is straightforward.
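Spelled out, the boundedness claim is a direct Cauchy–Schwarz step (our paraphrase of the standard argument, with $y \in \{\pm 1\}$ denoting the label):

```latex
|\nu^\top (y\mu)| \;\le\; \|\nu\|_2 \,\|\mu\|_2
\;=\; O\!\left(\tfrac{1}{\|\mu\|_2}\right)\cdot \|\mu\|_2
\;=\; O(1),
```

so the score the head assigns to the signal token stays of constant order, as claimed.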
---
> 5: Running head should be fixed.
Thank you for your comment. We've already fixed it.
---
> 6: The authors should provide the gradient update equations for $W$ and $p$.
For gradient updates, they are provided in Equations 8 and 11 in Appendix B due to space limitations in the main text. **We have added the following sentence after the gradient update equations in Section 3.4:**
"The weight updates with specifically calculated loss gradients are provided in Appendix B."
---
> 7: What happens in “Not Overfitting” case after the time step $\tau$?
In the not-overfitting case, signal learning is dominant, and the model fits only the clean samples as stated in the theorem.
This is discussed in detail in Figure 2, the paragraph from line 291, and Appendix D.2.
---
> 8: Time step $\tau$ should depend on uncertainty $\delta$ but it seems there is no such dependency.
Uncertainty $\delta$ indirectly affects the time step $\tau$ through the parameter assumptions (A1)-(A8).
A similar order notation for the time step is also found in existing benign overfitting studies [Cao+, 2022; Jiang+, 2024].
---
> 9: Why does $g(x) = 2x + 2\sinh(x - \log T)$ appear in the proof techniques?
Due to character limit of OpenReview, please refer to the second comment from Reviewer BNsU at the bottom. **We have added an explanation in the main text after line 329**.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Some of my concerns have been resolved, but I still have one remaining question.
>> 7: What happens in the “Not Overfitting” case after the time step $\tau$?
> In the not-overfitting case, signal learning is dominant, and the model fits only the clean samples as stated in the theorem. This is discussed in detail in Figure 2, the paragraph from line 291, and Appendix D.2.
I would like to clarify my understanding of this response. In the “Not Overfitting” case, is it correct that the training loss does not continue to decrease indefinitely, since the model does not fit the noisy samples?
In related literature, such as Kou et al. (2023), which considers a similar setting with noisy labels and Gaussian noise, it has been shown that models can eventually memorize the noisy data by fitting the noise. Could the authors clarify whether their setting fundamentally prevents this kind of memorization, or whether the training loss would continue to decrease if training were extended further?
In addition, I would encourage the authors to consider improving the presentation of the paper in future versions, especially by providing more detailed discussion of the technical assumptions and key terms used throughout the text.
Reference:
Kou et al., Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks, ICML 2023.
---
Reply to Comment 1.1.1:
Comment: Thank you for your further questions. We sincerely hope that our response to the remaining questions below will fully resolve your concerns and be reflected in your evaluation of our paper.
---
> Could the authors clarify whether their setting fundamentally prevents this kind of memorization, or whether the training loss would continue to decrease if training were extended further?
You are absolutely right that the model does not fit the noisy samples in the "Not Overfitting" case, and therefore the loss does not continue to decrease. The CNN analysis in [Kou+, 2023] demonstrates the memorization of noisy data, corresponding to our analysis in the "Benign Overfitting" case. We show that the model memorizes noisy samples by fitting the noise in this case (please see Figure 2, right).
We emphasize that **our "Not Overfitting" case, characterized by $\text{SNR}^2 = \omega(n^{-1})$, is not within the scope of [Kou+, 2023]**. Specifically, Condition 4.1.1 in their paper assumes a sufficiently large dimension $d$, imposing a stricter condition than our assumption (A1). Extracting the relationship among $d$, $n$, and $\\|\mu\\|_2$, their setting assumes $\text{SNR}^2 \lesssim n^{-1}$; thus, the regime we study in our "Not Overfitting" case, $\text{SNR}^2 = \omega(n^{-1})$, is not covered by their framework.
Our results successfully demonstrate distinct token selection scenarios across different SNR regimes **under more general assumption on $d$**.
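To make the regime comparison concrete, here is a toy classifier for the quantity this discussion centers on, $n \cdot \text{SNR}^2$. The definition $\text{SNR} = \\|\mu\\|_2 / \sqrt{d}$ and the unit threshold are our assumptions for illustration; the paper's actual conditions involve constants and the further assumptions (A1)–(A8):

```python
def snr_regime(mu_norm, d, n):
    """Classify by n * SNR^2 with SNR = ||mu||_2 / sqrt(d) (illustrative only)."""
    snr2 = mu_norm**2 / d
    return "not overfitting" if n * snr2 > 1 else "benign/harmful overfitting"

print(snr_regime(mu_norm=10.0, d=100, n=50))    # n*SNR^2 = 50: high-SNR regime
print(snr_regime(mu_norm=2.0, d=10000, n=50))   # n*SNR^2 = 0.02: low-SNR regime
```

In this shorthand, [Kou+, 2023] operates where $n \cdot \text{SNR}^2 \lesssim 1$, while the "Not Overfitting" case above sits in the complementary regime.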
---
> I would encourage the authors to consider improving the presentation of the paper in future versions, especially by providing more detailed discussion of the technical assumptions and key terms used throughout the text.
Thank you for your additional suggestion. In the final version of the paper, we are allowed to include one extra page, and we have added the following to address your comments on the technical assumptions and key terms.
**Technical assumptions**
- We have added the following comment after line 253, right:
*"The assumption $\\|\nu\\| = O(1/\\|\mu\\|_2)$ in the theorem ensures that the token scores remain at most of constant order. This can be easily satisfied by appropriately scaling down the model output.”*
- For the paragraph starting at line 177 in the right column, we elaborated on why the condition on the head $\nu$ is required for analyzing benign overfitting in token selection, as explained in our previous reply. Due to the space limitations of OpenReview, we omit the specific update here.
- To improve clarity, the discussion starting at line 189 in the right has been restructured into a new remark titled *"Relevance to practical scenarios".*
**Key terms**
- In response to your question about "Not overfitting", we added clarification after line 299 left:
*"The middle in Figure 2 corresponds to the not overfitting case, where signal learning dominates and the model does not fit noisy data. The right figure illustrates the benign overfitting case, where noise memorization becomes dominant."*
- To emphasize the inherent difficulty of the problem we study and highlight the novelty of our contributions, we added the following explanation after line 304, left:
*"This makes analyzing the token selection dynamics inherently challenging. Our analysis under label noise setting must account for competing training directions between signal learning and noise memorization (as seen in Figure 2), as well as between clean and noisy samples for signal learning. These balances depend on softmax probabilities and are not determined by pre-training quantities such as SNR or label noise $\eta$. This is a specific difficulty to the attention mechanism, which is absent in existing benign overfitting studies. For instance, depending on the convergence speed—how quickly $s(\tau)$ approaches 0 or 1—it is possible that even when label noise $\eta$ is small, the actual contribution to the weight updates at some time step can be dominated by noisy samples. This motivates us to carefully analyze the dynamics of softmax probabilities to evaluate the direction of these competing relationships."*
Regarding your initial concern about the less-motivated presentation of Section 4.1, we have already addressed this with concrete updates outlined in our first reply.
In the revised version, we have improved our presentation to deliver our contributions and motivations as clearly as possible within the page limit, while including the problem setting, main results, proof essence, and experiments. If you have specific parts you find particularly inappropriate or unclear, we would be happy to revise further.
---
**(Edit: 8 Apr, AoE)**
Thank you very much for your detailed and constructive feedback. We have carefully addressed each of your comments, and we hope that our responses sufficiently resolve your concerns. We would be truly grateful if you could kindly reconsider the evaluation score. We sincerely appreciate the time and effort you have invested in reviewing our work. | Summary: The paper develops a theoretical framework to analyze the dynamics and generalization properties of token selection in attention mechanisms under label noise, focusing on a one-layer attention network for binary classification. It demonstrates that with a high signal-to-noise ratio (SNR), the model selectively fits clean samples and generalizes well (not overfitting), while with a low SNR, the model overfits noisy training data yet still achieves low test error—a phenomenon known as benign overfitting. The analysis introduces key concepts such as the evolution of softmax probabilities and the "attention gap" metric, which quantify how the mechanism distinguishes between relevant and noisy tokens during training, and it highlights a delayed generalization process reminiscent of grokking. Extensive experiments on both synthetic and real-world datasets support the theoretical findings by showcasing transitions between harmful overfitting, benign overfitting, and non-overfitting regimes.
Claims And Evidence: The paper’s claims are largely supported by rigorous theoretical analysis and clear empirical evidence, with the real-world experiments being a particular highlight. However, when compared to Jiang et al. (2024), the paper appears relatively less impressive in terms of novelty and overall impact. Moreover, the comparison with Magen et al. (2024) is not sufficiently detailed; it would be beneficial to include a comprehensive comparison—preferably in Table 1—to clearly delineate the strengths and weaknesses of each approach in both the theoretical analysis and experimental validation.
Methods And Evaluation Criteria: The experiments are clear and can effectively support the theoretical results, with a primary focus on validating the theoretical claims.
Theoretical Claims: I reviewed the theoretical proofs and did not find any major issues overall. However, I have two points for improvement. First, the appendices frequently refer to a “good run” without a precise definition; clarifying what qualifies as a good run would be beneficial. Second, while the paper distinguishes between benign and harmful overfitting, it would be clearer to provide an explicit formal threshold or result that delineates the boundary between these two regimes.
Additionally, I feel that directly assuming a fixed $\nu$ (as in Assumption 3.3) is somewhat of a “shortcut”. This assumption sidesteps the challenge of analyzing the full dynamics when $\nu$ is also learned, and may limit the generality of the results. Providing further justification or exploring a more general setting ould strengthen the paper’s theoretical contributions.
Experimental Designs Or Analyses: I examined the experimental designs for both the synthetic and real-world experiments. The synthetic experiments are well-designed to validate the theoretical predictions.
Supplementary Material: I went through the appendix briefly and found the organization to be well-structured. The results presented in the supplementary material appear to be correct and effectively complement the main text.
Relation To Broader Scientific Literature: Overall, this is an interesting topic with some novel findings. The paper builds on and extends prior work on benign overfitting—studied in linear regression, two-layer neural networks, and kernel methods (e.g., Bartlett et al., 2020; Hastie et al., 2022; Liang & Rakhlin, 2020)—by applying these ideas to the attention mechanism in transformers.
While the topic and findings are compelling, the contribution is somewhat diminished in comparison with recent works like Jiang et al. (2024) and Magen et al. (2024), which also explore similar phenomena in vision transformers.
Essential References Not Discussed: I agree that the paper seems to have provided all the essential references needed to understand its context and contributions. The citations cover the key areas of benign overfitting, attention mechanism dynamics, and related analyses in transformers. No critical works appear to be missing.
Other Strengths And Weaknesses: Other strengths of the paper include its clear and well-organized presentation, as well as its rigorous approach that blends theoretical analysis with empirical validation. The work extends existing benign overfitting analyses to the context of attention mechanisms in transformers, which is a creative and timely direction given the widespread use of these models.
On the other hand, some weaknesses are apparent. The novelty is somewhat limited when compared to concurrent works such as Jiang et al. (2024) and Magen et al. (2024), which explore similar phenomena in vision transformers. Additionally, certain assumptions—like directly fixing $\nu$ (Assumption 3.3)—could be seen as a shortcut that may restrict the generality of the results. Clarifications regarding the definition of “good run” in the appendices and a more explicit delineation between benign and harmful overfitting regimes would also improve the paper's clarity and impact.
Other Comments Or Suggestions: I suggest that in Figure 1 the paper quantitatively indicate how large or small the values are (for example, by annotating the scales or ranges on the plot).
Additionally, in the second part of Theorem 4.1, it would be beneficial to explicitly state the lower bound for SNR to clarify the precise conditions under which benign overfitting occurs.
Questions For Authors: What is the specific role of label noise in your training process, and how does its presence complicate the dynamics compared to a scenario without label noise?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We hope that our answers address all of your concerns, and that this reply will lead you to reconsider your decision.
We first answer your question because it is an important point relating to our contribution.
---
> 1: What is the specific difficulty of label noise setting?
The presence of label noise introduces a significant challenge due to the existence of **competing training directions within the same training run**. This results in two key difficulties:
1. Signal learning vs memorization in each token selection (Figure 2).
2. Clean samples vs noisy samples in the learning direction of class signals.
These challenges become even more difficult because the weight updates depend on the softmax probability $s(\tau)$ (Eqs. 8 and 11) and are NOT statically determined by pre-training quantities such as SNR or label noise $\eta$. This is **a fundamental difficulty that does not appear in previous analyses of benign overfitting** (e.g., two-layer NN [Frei+, 2022]).
For instance, depending on the convergence speed—how quickly $s(\tau)$ converges to $0$ or $1$—it is possible that even when label noise $\eta$ is very small, the actual contribution to weight updates at some time step can be dominated by noisy samples.
Thus, it is crucial to track the whole training dynamics of each token to analyze 1 and 2, making the analysis inherently difficult. A large portion of this paper is dedicated to addressing and resolving this issue.
In addition to the existing explanation from line 86, **we have added a new section titled "Difficulty of Label Noise Setting" in Appendix A**, incorporating an explanation similar to this reply.
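To make the "for instance" above concrete, here is a toy back-of-the-envelope calculation (the numbers and the $s(1-s)$ gradient factor are illustrative assumptions, not the paper's Eqs. 8 and 11): because per-sample updates are weighted by a softmax-derivative factor, clean samples whose selection probability has nearly converged contribute little, so even a small noisy fraction can dominate an update step.

```python
# Toy illustration (assumed form): per-sample gradient magnitude is modeled
# by the softmax-derivative factor s * (1 - s); the paper's exact dynamics
# may differ, but the saturation effect shown here is generic.
def grad_weight(s):
    return s * (1 - s)

n, eta = 1000, 0.05      # 1000 samples, 5% label noise
s_clean = 0.99           # clean samples: softmax probability nearly converged
s_noisy = 0.50           # noisy samples: still mid-training

clean_total = (1 - eta) * n * grad_weight(s_clean)   # ~9.4
noisy_total = eta * n * grad_weight(s_noisy)         # 12.5
print(clean_total, noisy_total)  # the small noisy fraction dominates the update
```

Despite label noise of only 5%, the noisy samples' total contribution exceeds that of the 95% clean samples, matching the intuition described above.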
---
> 2: The novelty and impact are limited compared to [Jiang+, 2024].
The above question highlights a challenge that is specific to the attention mechanism but is entirely absent in [Jiang+, 2024]. As discussed above, addressing this challenge is a major focus of our work and not a straightforward extension.
Additionally, we discuss finer differences starting from line 627 in the Appendix.
---
> 3: There is no sufficient comparison with [Magen+, 2024].
We discuss the differences from line 143 and a more detailed difference from line 650 in the Appendix, which clarifies the novelty of our work.
In response to the review, **we have added a row for [Magen+, 2024] in Table 1**.
Furthermore, since the Appendix previously only provided a theoretical comparison, **we have also newly included a comparison of experimental validation**, as suggested.
---
> 4: There is no definition of “good run”.
We have already defined "good run" precisely in Definition C.2 and have confirmed that this term is not used prior to this definition, including the main text.
---
> 5: There are some concerns about the threshold between benign and harmful overfitting.
Assumption (A2) determines the lower bound on SNR for benign overfitting as $\Omega(d^{-1/4})$. **We have added a clarification regarding this point below Theorem 4.1.**
Although this is not a continuous boundary, we discuss in Remark 4.2 that when $\text{SNR}^2 = o(d^{-1/2})$ (equivalently, $\text{SNR} = o(d^{-1/4})$, matching the order of the lower bound above), the model exhibits harmful overfitting. This boundary also appears as a minimax generalization bound in [Xu and Gu, 2023].
Furthermore, the boundary is demonstrated through synthetic experiments (Figure 4, right).
In response to your comment, **we have explicitly annotated Figure 1 with the specific values** instead of "large" and "small".
---
> 6: Fixing head $\nu$ as in Assumption 3.3 limits the generality of the results.
Our study is the analysis of benign overfitting in the **token selection mechanism**; thus, unnecessarily enlarging the trainable components, including the head $\nu$, would obscure which part enables noise memorization and benign overfitting. Specifically, when we show benign overfitting, it would be unclear whether this is because of the token selection inside the softmax or simply a consequence of the linear model $\nu$, which has already been extensively studied (e.g., [Bartlett+, 2020]). Our work establishes a novel result that benign overfitting can occur solely through the attention mechanism.
Assumption 3.3 is necessary to formulate the problem of token selection given the assigned token scores, rather than being a limitation. To clarify this point, **we have moved this part from Assumption to the Problem Setting**.
In this setting, our work is directed toward a more general token-selection problem instead of head training. We emphasize that analyzing token selection with label noise involves handling nontrivial softmax dynamics, dedicating a significant portion of our proofs to address this difficulty. The novelty and difficulty are explained in Comment 1 above and in Table 1 of the main text.
Finally, in response to your comment, **we have added a new proposition after Appendix E** showing that a one-step optimization from a randomly initialized $\nu$ satisfies Assumption 3.3, which supports the validity of this setting to some extent.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses. I believe most of my concerns have been addressed, and I have accordingly increased my score to 3. I look forward to the revised version as promised by the authors. | null | null | null | null | null | null |
On Expressive Power of Looped Transformers: Theoretical Analysis and Enhancement via Timestep Encoding | Accept (poster) | Summary: The paper studies the limitations of a hardmax 1-layer Transformer looped $r$ times with a fixed, limited embedding size, extending Zhang et al. (2023), which studied looped ReLU networks. Furthermore, the paper proposes layer-dependent scaling (timestep encoding) for the Looped Transformer to mitigate the identified limitation.
## update after rebuttal
### *After Discussion*
*I remain unsatisfied and disappointed following in-depth discussions with the authors. In all, I think the paper needs a major revision before submission and is currently missing many important parts.*
1. ***Downplaying Fixed-Width Architecture**. In the title, abstract, contribution bullet points, and Table 1, the authors—intentionally or not—downplay / hide their use of a **fixed-width** transformer architecture, which, though adopted in some prior work, isn’t widely celebrated.*
2. ***Missing Comparison over the Proposed ‘Limitation’**. This is particularly striking given their second bullet point highlighting the limitations of looped transformers. They stress that a 1-layer looped transformer’s reliance on specific continuity coefficients drives its error, yet **they fail to compare this with non-looped transformers** of equivalent bit complexity (e.g., to what degree might non-looped designs mitigate this dependency?).*
3. ***Unsatisfactory Response w.r.t. a Related Work**. I cited Saunshi et al. (2025), which shows a 1-layer looped transformer overcoming the limitation the authors note (without explicitly fixing width). Their rebuttal emphasized parameter efficiency—an outcome they later concede wasn’t satisfactorily achieved and is not their story—and offered a flawed excuse for overlooking this key work on looped transformer expressivity. They claimed unavailability despite ICLR submissions being anonymously accessible for citation and reading well in advance. Since Saunshi et al. (2025) undermines author’s claim ‘looped transformer’s expressive power remains underexplored’ in their abstract and main story, it should not be absent.*
4. ***Missing Parts in Experiments**. Following my suggestion, they designed language tasks with varying continuity coefficients tied to their core theories in the rebuttal. However, these should have been conducted earlier, across more datasets and disturbance conditions, and **should have been** detailed in Section 5’s first part—yet they’re absent. Their further response on comparison fairness is unprofessional: I don’t expect a reduced base model parameter count as they conjectured, but the base + timestep total should match the without-timestep setup. Given their focus on this **limitation**, experiments with non-looped and k-layer looped counterparts are needed to fairly assess if these overcome it—perhaps all underperform with 1-layer looped + timestep. Moreover, the extent to which their encoding boosts parameter efficiency at equal performance levels warrants discussion but is currently absent.*
5. ***Failure of Deeper Discussions**. Top-conference rebuttals value discussions on potential topics and potential improvement of current techniques, as these enrich the Discussion Section or appendix, fostering its impact and welcoming impactful follow-up work. I explicitly noted my suggestions—like k-layer looped transformers, wavelet analysis, and memory capacity—extend beyond their current scope, not expecting polished analyses now. While the authors acknowledge these as critical, they adopt a dismissive stance, avoiding deeper heuristic and brainstorming-style exploration based on current knowledge. Instead, they waste characters distancing these from their contributions—which I already understand—rather than engaging constructively as someone eager to explore non-triviality. They admit their cube and ID-based analyses could improve, potentially easing model issues, yet resist a brainstorming-style discussion. Though they expressed "optimality is non-trivial," I struggle to detect sufficient enthusiasm for making their techniques more realistic or elegant.*
Claims And Evidence: The paper’s claim regarding the limitations of looped transformer is not well-illustrated by experiments, but the effectiveness of the layer-dependent scaling strategies is validated by experiments on case studies.
Methods And Evaluation Criteria: The proposed Timestep Encoding method suggest layer-dependent scaling for loop transformer with specific design motivated by prior work. The author validated it through three case studies.
Theoretical Claims: The paper claimed that their hardmax 1-layer $r$-times looped transformer with fixed limited embedding size’s expressivity would depend on certain continuity coefficients, and their proposed layer-dependent scaling would mitigate this.
Experimental Designs Or Analyses: 1. No experiments showed that the looped transformer really suffers from the specified continuity coefficients. In the authors' experiments, since the width is the same, the comparisons between L=6 and r=8, or L=12 and r=10, show that the TF is not obviously superior to the Looped TF.
2. The effectiveness of layer-dependent encoding suits common intuition, since it essentially costs more parameters to make each layer slightly distinct. This cannot validate the authors' claim that it alleviates the specific limitation.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: Saunshi et al. (2025), a paper published in ICLR 2025, has shown that looped transformer can simulate non-looped counterpart while requiring a larger NN width.
Other Strengths And Weaknesses: **Strengths**
1. The paper’s logic flow is clear, with good illustrations.
2. The paper cast a table to compare different studies.
**Weakness**
1. The authors missed some important references such as Saunshi et al. (2025). In these studies, as well as in real-world practice, looped transformers adopt a $k$-layer transformer looped $L$ times, instead of the 1-layer transformer looped $L$ times ($r$ in this paper's notation) considered in this paper. With an appropriate $k$, the looped transformer should have a chance to overcome the drawback the paper discussed, and the looped transformer should be more parameter-efficient than its non-looped counterpart, as in literature like Saunshi et al. (2025).
2. The theoretical results are based on constraining the hidden embedding size of the transformer to $(17d+9)\times N$, which is very tricky in my view. The limitations of the looped transformer (depending on continuity coefficients) that the authors proved might be relaxed when the hidden embedding is not constrained. In fact, Saunshi et al. (2025) has shown that a 1-layer transformer looped $L$ times ($r$ in this paper's notation) can simulate the non-looped counterpart at the cost of a larger embedding size. But still, the looped transformer is more parameter-efficient.
3. The paper didn't provide an estimation error bound, which usually co-exists with approximation error results in similar well-established theoretical studies like Takakura and Suzuki (2023).
4. The paper considers replacing the transformer's softmax operation with hardmax, which in my view is a huge drawback. In fact, by appropriately choosing the QK matrices, the transformer could be made equivalent to a looped ReLU network, and the techniques of the prior study on looped ReLU networks' drawbacks (Zhang et al. (2023)) could be directly applied with some improvements without much effort. For a potential method to use softmax, refer to my first point in the "Other Comments Or Suggestions" section.
Other Comments Or Suggestions: 1. Considering the actual softmax operation would make the analysis more realistic, despite introducing additional technical challenges. However, it remains feasible with sufficient technical effort, as demonstrated in Lemma C.1–2 of Takakura and Suzuki (2023).
2. As discussed in Weakness 1, the discussions over $k$-layer $L$ looped transformer should be included to complement the real-world scenario.
3. The memorization of IDs might be more efficient using random feature transformations or even the NTK (similar approaches are adopted in Nichani et al. (2025)). In this regard, how far your looped transformer is from the Optimal Storage Capacity is worth discussing.
Questions For Authors: 1. Please respond to Weakness 2.
2. Can your cube & ID-based analyses (which induce $\sqrt{d}$ and the design of $\mathcal{K}$) be improved by advanced wavelet analyses? Some case studies?
3. See “Experimental Designs Or Analyses” section. Can you design some language tasks that different tasks depend on different scale of your defined continuity coefficients to show the effectiveness of your theoretical claims?
*Reference*
Zhang et al. (2023). On enhancing expressive power via compositions of single fixed-size ReLU network. In Proceedings of the 40th International Conference on Machine Learning, 2023.
Saunshi et al. (2025). Reasoning with Latent Thoughts: On the Power of Looped Transformers. In International Conference on Learning Representations (ICLR2025), 2025.
Takakura and Suzuki (2023). Approximation and Estimation Ability of Transformers for Sequence-to-Sequence Functions with Infinite Dimensional Input. In International Conference on Machine Learning, pages 26517–26582. PMLR, 2023.
Nichani et al. (2025). Understanding Factual Recall in Transformers via Associative Memories. In International Conference on Learning Representations (ICLR2025), 2025.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: > Q1 (W2). The theoretical results are based on constraining the hidden embedding size of transformer $(17d+9)×N$, which is very tricky. In fact, Saunshi et al. (2025) has shown that a 1-layer transformer looped can simulate the non-looped counterpart at the cost of a larger embedding size, still, the looped transformer is more parameter-efficient.
We respectfully point out that, **in contrast to Saunshi et al. (2025), our results achieve parameter efficiency** by showing that **the parameter complexity is $\boldsymbol{O(d)}$**. The construction by Saunshi et al. (2025) depends on the number of layers and width of the target non-looped transformer. Since the number of layers or width in a non-looped transformer typically grows with the approximation error or sequence length, the corresponding looped transformer inherits the dependence, which is not parameter-efficient.
> **Theorem 5.2 (Saunshi et al., 2025):** For any transformer with $L$ layers, at most $R$ distinct, $d$ embedding size, $d_{FF}$ hidden dimension for MLP ... loops a 1-layer transformer block for $L$ times, with $d + R + 2$ embedding size, $R{h_{FF}} + O(L)$ hidden dimension.
We will clarify this distinction in Section 2 (Background) and emphasize our contribution in Section 3.3. (Main Result):
**line 158:** Saunshi et al. (2025) showed that Looped Transformers can approximate standard Transformers, ..., their construction inherits this dependence, limiting parameter efficiency.
**line 207:** **The parameter complexity is $\boldsymbol{O(d)}$**, depending solely on the input dimension, not the approximation error or sequence length, highlighting the parameter efficiency.
> W1-1. The author missed some important references.
As the work was not publicly available at the time of submission, it will be cited in the camera-ready version.
> W1-2. With an appropriate $k$, the looped transformer should have a chance to overcome the drawback the paper discussed ...
We respectfully point out that achieving universal approximation **even with a single layer** is a non-trivial theoretical contribution. Exploring how efficiently multiple layers help falls under the study of non-looped Transformers. In contrast, our work investigates how the approximation rate depends on the number of loops. We will add this as future work.
> W3. The paper didn’t provide estimation error, which usually co-exists with approximation error ...
To the best of our knowledge, estimation error bounds are not commonly discussed in studies on approximation (Yun 2020, Kim 2023, Kajitsuka 2024, Jiang 2024). This is likely due to a difference in problem settings: estimation error is typically analyzed in regression settings. We will add this as a future direction.
> W4-1. The paper considers replacing the softmax operation with hardmax, which in my view is a huge drawback.
We use the softmax function to **approximate** (not replace) the hardmax function by scaling the input (Yun et al., 2020; Kim et al., 2023). To avoid confusion, we will revise as:
**Before:** we use the hardmax instead of the softmax
**After:** we use the softmax to **approximate** the hardmax
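As a quick numeric check of this scaling argument (a generic sketch, not the paper's actual construction): multiplying the logits by a growing scale factor before softmax drives the output toward the one-hot hardmax distribution.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

logits = np.array([1.0, 2.0, 3.0])
for beta in (1.0, 10.0, 100.0):
    print(beta, np.round(softmax(beta * logits), 4))
# As beta grows, the distribution concentrates on the argmax entry,
# i.e., softmax(beta * x) approaches hardmax(x).
```

This is the standard sense in which hardmax-based constructions can be realized by softmax attention with scaled query-key inner products.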
> W4-2. The techniques of looped ReLU’s (Zhang et al., 2023) could be directly applied without much effort.
We respectfully point out that the techniques from ReLUs (Zhang et al., 2023) cannot be directly applied to Transformers due to fundamental differences in architecture and target function class, that is, transformers had to compute **contextual mappings**. It required newly defined continuity and techniques. We will clarify as:
**line 127:** Our question is whether the result (Zhang et al., 2023) can be extended **for contextual mappings**
> Q2. Can your cube & ID-based analyses be improved by advanced wavelet analyses? C3. The memorization of IDs might be more efficient ...
Optimality is non-trivial. Existing proof techniques (e.g., Kajitsuka 2025) do not directly apply to looped architectures, and new methods are needed. We will include this as an important direction for future work.
> Q3. Can you design some language tasks that different tasks depend on different continuity coefficients to validate theoretical claims? No experiments showed that they suffer from continuity coefficients and cannot validate the author’s claim that timestep alleviates the specific limitation.
We designed a perturbed WikiText-103 task (10% of tokens randomly replaced) to test sensitivity to continuity. We further analyzed the continuity coefficients of the trained models by applying perturbations to the input and measuring the change in output embeddings. We found that timestep encoding, with only about a 5% increase in parameters, improved memorization and consistently reduced continuity coefficients. Without it, the model was less stable and failed to capture accurate input-output relationships.
| Model | Cross-Entropy Loss (Train) | Continuity Coefficient |
|--|--|--|
| w/o Timestep | 4.32 | 130.6 |
| w/ Timestep | **4.18** | **21.5** |
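A minimal sketch of how such a continuity coefficient could be estimated empirically as described above (perturb the input, measure the change in output embeddings). The `embed` map here is a hypothetical stand-in; the actual measurement would use the trained Looped Transformer's embedding map.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32
W = rng.normal(size=(d, d)) / np.sqrt(d)

def embed(x):
    # hypothetical stand-in for the trained model's output-embedding map
    return np.tanh(W @ x)

def continuity_coefficient(x, n_perturb=100, eps=1e-2):
    """Empirical Lipschitz-style ratio: max ||f(x+delta) - f(x)|| / ||delta||."""
    base = embed(x)
    ratios = []
    for _ in range(n_perturb):
        delta = eps * rng.normal(size=d)
        ratios.append(np.linalg.norm(embed(x + delta) - base)
                      / np.linalg.norm(delta))
    return max(ratios)

print(continuity_coefficient(rng.normal(size=d)))
```

A larger maximum ratio indicates a less stable input-output map, matching the direction of the comparison reported in the table.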
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response.
- W1.2
The parameter efficiency of the loop transformer should **be contrasted with a non-loop transformer**. I’m not expecting a 1-layer loop transformer to achieve Optimal Storage Capacity (see my Q3.2 comments below). The manuscript, titled “On **Expressive Power** of Looped Transformers…,” avoids emphasizing the bit complexity, and the whole main body doesn’t seem to celebrate its bit complexity bound as satisfactory. **However**, when I respectfully noted that the **expressive power** of a 1-layer loop transformer might overcome the “specific limitation inherent to the loop structure” from the second contribution listed, the authors countered with parameter efficiency.
- W1-1
ICLR submissions are publicly available anonymously via OpenReview before publication, and ICML papers often cite such work appropriately in this manner. If the authors truly wish to highlight their bit complexity, they’ve overlooked **at least** two key studies: Nichani et al. (2025) (on arXiv since 9 Dec 2024) and Allen-Zhu and Li (2025) (on arXiv since 8 Apr 2024).
- W1.1
I maintain that discussing the k-layer L looped transformer is **valuable for both theoreticians and practitioners**. I’m not expecting a polished analysis (which could be future work), but **just** a discussion—worthy of a dedicated section in the main body or appendix to boost the paper’s impact, and welcome follow-up work. Since the k-layer L looped transformer is more practical than the 1-layer version, it’s worth exploring how k can mitigate the limitation the authors identified. Additionally, whether your Timestep Encoding is more efficient deserves further empirical illustration in an additional experimental section.
- W3
If your results on expressive power and bit complexity have limitations and fail to convince me of **Pareto Optimality** in their trade-off, an estimation error analysis for the looped transformer would be compelling if it achieves min-max **optimal**ity for non-parametric regressions.
- Q3
The gap between 4.32 and 4.18 isn’t striking to me, and I don’t see why the authors didn’t fairly compare them by adjusting parameters (e.g., increasing w/o's parameters by 5% or decreasing w/ Timestep's parameters). I value this experiment, but it should have been conducted earlier and on more datasets and disturbance conditions. Besides, since this kind of experiment can truly relate to your main theories, it **should have been** included in the first part of Section 5 with detailed discussion—yet it’s currently absent.
- Q2
My review was clear: I’m simply inviting **just** a discussion based on your settings, not demanding a fully developed improvement. A thoughtful discussion could enhance the main body or appendix, drawing researchers for follow-up work and elevating the paper’s impact. I don’t understand the authors’ resistance.
- C3
On parameter efficiency, the authors declined to even **just** discuss “how far your looped transformer is from Optimal Storage Capacity” in my Q3—which is disappointing—clearly I wasn’t expecting you to match Kajitsuka and Sato (2024). I struggle to sense the authors’ passion for non-triviality.
*Reference*
Nichani et al. (2025). Understanding Factual Recall in Transformers via Associative Memories. In International Conference on Learning Representations (ICLR2025), 2025.
Allen-Zhu and Li (2025). Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws. In International Conference on Learning Representations (ICLR2025), 2025.
Kajitsuka and Sato (2024). On the Optimal Memorization Capacity of Transformers. In International Conference on Learning Representations (ICLR2025), 2025.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your thoughtful and constructive comments.
> W1.2. The parameter efficiency of the loop transformer should be contrasted with a non-loop transformer. ... when I respectfully noted that the expressive power of a 1-layer loop transformer might overcome the “specific limitation inherent to the loop structure” from the second contribution listed, the authors countered with parameter efficiency.
We respectfully point out that the reviewer may have overlooked the fact that **our problem setting is constrained to fixed-size transformers**, the same setting as for looped ReLU networks (Zhang et al., 2023). Our study focuses on analyzing the approximation rate with respect to the number of loops. In this regard, the results by Saunshi et al. (2025) do not conform to our setting, as their model does not have a fixed size.
As you correctly noted, we are **not** emphasizing bit complexity as a key contribution. We only discussed it in response to the reviewer’s question.
Thanks to your comment, we will avoid presenting parameter efficiency as a highlight, and instead clarify that it is part of our problem setting.
**Before(line 207):** The parameter complexity is $O(d)$, ..., highlighting the parameter efficiency.
**After:** While the number of parameters is fixed, the function can still be approximated by increasing the number of loops.
> W1-1. If the authors truly wish to highlight their bit complexity, ...
We would like to clarify that neither bit complexity nor parameter efficiency is a core contribution of our work. Rather, they define **the constraints of our theoretical setting**. Our emphasis on these aspects was intended only to address the reviewer's concerns.
> W1.1 k-layer L looped transformer is valuable for both theoreticians and practitioners
We sincerely agree with this comment. We simply wished to clarify that our goal is to analyze the approximation rate with respect to the number of loops, even with a constrained architecture (e.g., a single-layer model), and that increasing the number of layers represents a slightly different research direction from our primary focus.
Furthermore, we acknowledge that it is generally difficult to theoretically analyze constant-factor improvements. Nonetheless, we agree that this is an important direction and have added it to the discussion of future work in the revised manuscript.
> W3 If your results on expressive power and bit complexity have limitations and fail to convince me of Pareto Optimality in their trade-off, an estimation error analysis for the looped transformer would be compelling if it achieves min-max optimality for non-parametric regressions.
We agree that establishing minimax optimality would be an interesting direction. However, our current scope is limited to deriving approximation rates, and it remains unclear whether results from such a different problem setting would meaningfully complement our rate-based analysis.
> Q3 The gap between 4.32 and 4.18 isn’t striking to me, and I don’t see why the authors didn’t fairly compare them by adjusting parameters (e.g., increasing w/o's parameters by 5% or decreasing w/ Timestep's parameters).
Our theoretical result shows that the approximation rate can be improved by adding a specific module. Therefore, we do not believe it is necessary to reduce the number of parameters in the base model when incorporating the proposed module.
While we understand the concern that performance gains may stem from an increase in the number of parameters, we would like to highlight that in the dynamic programming task, the accuracy is improved from $47.7$% to $88.3$%, a substantial gain.
> Q2 I'm simply inviting just a discussion based on your settings. Can your cube & ID-based analyses be improved by advanced waveless analyses?
Unfortunately, due to the 5000-character limit in the response format, we were unable to provide a full discussion and prioritized addressing the main points. To clarify, our study is focused on deriving approximation rates for function classes, whereas the reviewer’s suggestion appears more aligned with a discussion of memorization capacity.
To the best of our knowledge, there are no prior works that utilize waveless analyses to improve ID-based analyses in terms of memorization capacity. Since no concrete example was provided, we find it difficult to assess how such approaches might contribute to our specific setting.
> C3. On parameter efficiency, the authors declined to even just discuss “how far your looped transformer is from Optimal Storage Capacity”
We respectfully point out that we did respond to this point: we cannot discuss how far our model is from the Optimal Storage Capacity because this quantity is not currently known for looped models. Since a lower bound has not yet been derived, it is inherently impossible to evaluate the gap from the optimal capacity.
---
Summary: The paper theoretically investigates the expressive power of Looped Transformers, neural architectures that recursively reuse a single Transformer block, which are appealing for their parameter efficiency and generalization capabilities. The authors define a new modulus of continuity tailored to sequence-to-sequence functions to establish the approximation rate of Looped Transformers. Their analysis identifies a specific limitation of the looped architecture due to weight-tying constraints. To address this, they propose introducing scaling parameters conditioned on timestep encoding, thereby enhancing the expressive power of the architecture. Empirical results across various reasoning tasks and standard Transformer benchmarks validate the theoretical findings, demonstrating improved performance with increased loop counts and further gains achieved through timestep encoding.
Claims And Evidence: - Claim: Looped Transformers can universally approximate continuous permutation-equivariant sequence-to-sequence functions.
- Evidence: Proven through a theorem (Theorem 3.6), supported by mathematical proof utilizing newly defined modulus of continuity.
- Claim: Looped Transformers face inherent limitations due to their weight-tying nature, impacting their expressive power.
- Evidence: Demonstrated analytically (Lemma 4.1), which highlights limitations in approximating specific target mappings exactly due to recurrent feed-forward layer constraints.
- Claim: Incorporating timestep encoding with scaling parameters mitigates the limitations and improves approximation capabilities.
- Evidence: Theoretical justification (Theorem 4.2) shows exact memorization capability with time-dependent scaling, and experiments validate improved performance on reasoning and standard Transformer tasks.
Methods And Evaluation Criteria: The methods and evaluation criteria proposed in the paper are suitable. The authors use well-established theoretical frameworks (modulus of continuity, universal approximation results) and practical machine learning benchmarks (reasoning tasks, dynamic programming problems, Sudoku, in-context learning, language modeling) to validate their theoretical insights.
Theoretical Claims:
- Looped Transformers are universal approximators for permutation-equivariant sequence-to-sequence functions (Corollary 3.7).
- Introducing timestep encoding and scaling parameters can overcome certain theoretical limitations specific to Looped Transformers (Theorem 4.2).
The theoretical claims appear correct. The proofs provided are logically structured, detailed, and sound. The innovative use of three separate moduli of continuity to evaluate approximation capabilities is well justified.
Experimental Designs Or Analyses: Experiments included:
- Reasoning tasks (Sudoku, Countdown, Edit Distance, Longest Common Subsequence).
- In-context learning task.
- Language modeling on WikiText-103 dataset.
The experimental design is sound, with clear benchmarks appropriate for evaluating reasoning capabilities and general transformer-related tasks. The experimental analyses demonstrate clear performance improvements due to the proposed timestep encoding.
Supplementary Material: I reviewed the supplementary material. Specifically, I reviewed the detailed mathematical proofs of theorems provided in Appendix A, which support the main results presented in the paper.
Relation To Broader Scientific Literature: The paper relates to broader scientific literature through its theoretical examination of expressive power, aligning closely with previous works on sequence-to-sequence function approximation for Transformers (Yun et al., 2020). It extends existing results to weight-tied Looped Transformers, providing new insights into their capabilities and limitations. The introduction of timestep encoding aligns with existing literature on conditional scaling and adaptive methods in neural network architectures.
Essential References Not Discussed: The paper references most relevant prior works thoroughly.
Other Strengths And Weaknesses: Strengths:
- Clearly stated theoretical contributions with rigorous proofs.
- Novel and insightful identification of a limitation in Looped Transformers.
- Introduction of timestep encoding is a simple yet effective conceptual innovation, clearly validated by experiments.
Weaknesses:
- Additional computational overhead due to timestep encoding may diminish some practical advantages of looped architectures regarding parameter efficiency.
Other Comments Or Suggestions: n/a
Questions For Authors: Could you elaborate on whether and how the increase in computational complexity (due to timestep encoding) affects practical deployment scenarios compared to standard Transformers?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: > Q1 (W1). Could you elaborate on whether and how the increase in computational complexity (due to timestep encoding) affects practical deployment scenarios compared to standard Transformers?
Thank you for the insightful comment. While timestep encoding adds slight computational overhead, the increase in parameters is minimal, and we observed only a minor impact on throughput. Prior work [1,2] has also explored architectural modifications to enhance expressiveness without sacrificing efficiency, and we believe our design follows this practical direction.
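To make the small-overhead point concrete, here is a minimal NumPy sketch of a weight-tied loop with per-timestep scaling. This is our own toy construction for illustration, not the paper's architecture: the block, the `gammas` name, and the residual/tanh update are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model width (illustrative)

# Shared (weight-tied) block parameters, reused at every loop iteration.
W = rng.normal(scale=0.1, size=(d, d))

def looped_forward(x, n_loops, gammas=None):
    """Apply one weight-tied block n_loops times.

    With `gammas` (one scale per timestep), the shared block's output is
    rescaled differently at each iteration -- timestep-conditioned scaling.
    Without it, every iteration is forced to apply the identical map.
    """
    h = x
    for t in range(n_loops):
        update = np.tanh(h @ W)
        if gammas is not None:
            update = gammas[t] * update  # timestep-dependent scaling
        h = h + update  # residual connection
    return h

x = rng.normal(size=(4, d))  # a toy "sequence" of 4 tokens

plain = looped_forward(x, n_loops=6)
scaled = looped_forward(x, n_loops=6, gammas=np.linspace(1.0, 0.2, 6))
print(plain.shape, scaled.shape)
```

In this sketch the scaling adds one scalar per loop step, which is negligible next to the `d*d` shared weights; this is the sense in which the parameter increase of timestep encoding is minimal.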
**Reference:**
[1] Csordás et al., "MoEUT: Mixture-of-Experts Universal Transformers", NeurIPS 2024.
[2] Bae et al., "Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA", ICLR 2025.
---
Summary: This paper studies Looped Transformers from a theoretical perspective. Its first main contribution is to prove a universal approximation property, showing that a wide class of sequence functions can be represented by Looped Transformers. The main technical contribution here, compared to existing universal approximation results for transformers, is to show that this can be achieved with coupled parameters, i.e., looping a single fixed transformer layer. A second contribution is to propose timestep encoding, which shows improved empirical performance on a few tasks.
## update after rebuttal
The authors have promised to address the points I raised.
Claims And Evidence: The claims made in the Abstract, Introduction, and Conclusion are supported. I believe the stated theorems to be correct.
One issue is that the theoretical analysis uses the hardmax operation (line 106, left column), which is a nontrivial abstraction and may have expressiveness implications [1]. I think this should be mentioned when describing the paper's overall contributions (abstract, introduction) as in other work making such assumptions [e.g. 2].
[1] Yang et al, Simulating Hard Attention Using Soft Attention, arXiv 2024
[2] Amiri et al, Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers, arXiv 2025
Methods And Evaluation Criteria: Reasoning tasks used for evaluation make sense.
Theoretical Claims: I read through the proofs in the appendix, and checked the individual steps of proof of Theorem A.1 in detail. The later proofs are plausible to me, but I did not check every single step.
Experimental Designs Or Analyses: I checked the soundness of the experiments on reasoning tasks as described in Appendix C.
Supplementary Material: None
Relation To Broader Scientific Literature: The paper is related to both the theoretical literature on Looped Transformers, and literature on the universal approximation property of transformers. It brings these together by showing the latter kind of property under the constraints imposed by the looped transformers architecture.
Essential References Not Discussed: None, as far as I see.
Other Strengths And Weaknesses: Weaknesses:
- Of note, unlike a lot of other literature on the expressive power of transformers [e.g. 1,2,3], the present paper follows Yun et al in only studying functions on fixed-length inputs, without any regard to performance across inputs of potentially unbounded length. This restriction can be fine, but I think this should be made more transparent earlier in the paper (e.g., abstract, intro).
- Regarding clarity: I found the terminology surrounding "sequence IDs" and "token IDs" confusing in Definition 3.2. At this point, a reader may wonder: Is a Token ID something like a word embedding? or a positional embedding? or something different? And what exactly is a sequence ID -- what kind of object even is it -- a number, a tensor, or something else? I think the reader should be given a clearer idea at this point.
[1] Strobl et al, What Formal Languages Can Transformers Express? A Survey, TACL 2024
[2] Hahn, Theoretical limitations of self-attention in neural sequence models, TACL 2020
[3] Merrill and Sabharwal, The Expressive Power of Transformers with Chain of Thought, ICLR 2024
Other Comments Or Suggestions: line 88, right column: "can multiple graph" a word such as "perform" is missing after "can"
line 192, right column: "Looped Transformers is defined by" -- delete the word "is", as there is also a verb in the second-to-last line of the Corollary.
Section 5.1: The abbreviations LCS and ED and the mapping between the tasks listed in the text is not made explicit. The reader needs to refer to the appendix to figure this out.
line 571: $\delta^{-1} \in \mathbb{N}, \delta \geq 2$ -- should the second $\delta$ be $\delta^{-1}$?
In the proof, I found it hard to track when p-norms are applied to individual vectors (as in equation 49) or across the entire input space (as in equation 46, or in the line below equation 59). Can the authors make the distinction more explicit?
Line 1302: "combined" --> "combining"
Questions For Authors: In the hardmax operation, how are ties resolved? I.e., when multiple positions receive the maximum attention score.
Line 155 (right column): where is bit complexity used? It is introduced but then doesn't appear to show up.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: > W1. Unlike a lot of other literature, the present paper follows Yun et al in only studying functions on fixed-length inputs. This restriction can be fine, but I think this should be made more transparent earlier in the paper
We respectfully point out that the study of **function approximation** and memorization commonly assumes fixed-length inputs (Yun 2020, Kim 2023, Kajitsuka 2024, Jiang 2024). On the other hand, the prior work cited by the reviewer focuses on computational complexity rather than function approximation. Nevertheless, we agree that this distinction is important. Accordingly, we will revise the introduction to make this assumption more explicit.
**line 45:** Our contributions are summarized as follows: We establish the approximation rate of Looped Transformers for **fixed-length** continuous sequence-to-sequence functions.
> W2. Regarding clarity: I found the terminology surrounding "sequence IDs" and "token IDs" confusing in Definition 3.2.
We agree that the terminology in Definition 3.2 could be made clearer. We will explicitly clarify in the definition that the sequence IDs and token IDs are integers. Furthermore, we will explain the motivation: they are defined for a constructive proof to describe contextual mappings as Kim et al. (2023). Specifically, we will revise the definition and add the following explanation as:
**line 175:** In our proofs, we rely on the *contextual token ID* to describe contextual mappings.
**Definition 3.2.** A *token ID* is a **unique integer** assigned to each token. A *sequence ID* uniquely identifies each sentence. A *contextual token ID* uniquely identifies a specific token within a specific sentence.
This notion is defined in Kim et al. (2023), to which we refer for further details, for constructive proofs of contextual mappings. The actual construction of contextual token IDs may vary depending on the specific proof. In our case, we adopt a different construction from that of Kim et al. (2023).
> Q1. In the hardmax operation, how are ties resolved? I.e., when multiple positions receive the maximum attention score.
**Such cases result in errors, but these errors diminish as the approximation improves.** As you rightly noted, when multiple positions have identical values, the hardmax operation cannot distinguish between them. In our proof, we explicitly evaluate the measure of such regions and include them in the error term, $\mathcal{O}(\delta^{d})$. As the discretization becomes finer ($\delta\to 0$), the proportion of these regions decreases and eventually becomes negligible. To clarify this point, we will add the following sentence:
**line 329:** ..., where $\mathcal{O}(\delta^{d})$ arises from the case where identical tokens appear in sequences.
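The tie behavior can be illustrated with a small sketch (our own toy example; the functions and inputs below are invented, not taken from the proof): away from ties, a sharp softmax converges to the hardmax one-hot, while tied inputs form the small region whose measure the proof folds into the error term.

```python
import numpy as np

def hardmax(scores):
    """One-hot on the argmax; ill-defined at ties (here: mass is split)."""
    m = scores == scores.max()
    return m / m.sum()

def softmax(scores, beta):
    """Softmax with inverse temperature beta; beta -> inf approaches hardmax."""
    z = np.exp(beta * (scores - scores.max()))
    return z / z.sum()

distinct = np.array([0.3, 0.1, 0.7])
tied = np.array([0.5, 0.5, 0.1])

# Away from ties, a sharp softmax is essentially the hardmax one-hot.
gap = np.abs(softmax(distinct, beta=100.0) - hardmax(distinct)).max()
print(gap)  # tiny

# On the tie set the limit splits mass evenly between the tied positions.
print(hardmax(tied))
```

The tied inputs here are the kind of degenerate region whose measure shrinks as the discretization becomes finer, which is why the resulting error is absorbed into $\mathcal{O}(\delta^{d})$.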
> Q2. Line 155 (right column): where is bit complexity used? It is introduced but then doesn't appear to show up.
The bit complexity is part of the result stated in Theorem 3.6. However, we agree that its role is not clearly connected at the point of introduction. To improve clarity, we will revise the text (line 155) to explicitly indicate where the bit complexity appears in the subsequent analysis.
**Before:** We evaluate bit complexity, which denotes the maximum number of bits of weights.
**After:** We evaluate bit complexity ... weights. **Our theoretical results show how the approximation rate depends on both the number of parameters and the bit complexity**.
> C1. One issue is that the theoretical analysis uses the hardmax operation (line 106, left column), which is a nontrivial abstraction and may have expressiveness implications. I think this should be mentioned when describing the paper's overall contributions (abstract, introduction) as in other work making such assumptions.
While theoretical analysis with the hardmax operation is a common approach in function approximation (Yun et al., 2020; Kim et al., 2023), we agree that it constitutes a strong assumption. To clarify this, we will revise as:
**Before:** we use the hardmax function instead of the softmax function.
**After:** Our constructive proof **relies on the softmax function to approximate the hardmax function** as in previous work (Yun et al., 2020; Kim et al., 2023).
> O1. In the proof, I found it hard to track when p-norms are applied to individual vectors or across the entire input space.
We appreciate the comment. We will clarify, in the camera-ready version, the distinction between vector $p$-norms and function $L^p$-norms, using $\|f\|_{L^p}$ for the latter, as defined in Equation (2).
**Reference:**
Yun et al., "Are Transformers universal approximators of sequence-to-sequence functions?", ICLR 2020.
Kim et al., "Provable Memorization Capacity of Transformers", ICLR 2023.
Haotian Jiang and Qianxiao Li, "Approximation Rate of the Transformer Architecture for Sequence Modeling", NeurIPS 2024.
Tokio Kajitsuka and Issei Sato, "On the Optimal Memorization Capacity of Transformers", ICLR 2025.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors. These clarifications will improve the paper.
---
Summary: The authors conduct theoretical analysis on the expressive power and approximation rate of Looped Transformers, a variant of transformers with weight tying across layers and unbounded recurrence. The analysis reveals some limitations of Looped Transformers in function approximation, especially in memorizing input-output pairs. The authors also propose a way to fix this by adding a scaling mechanism conditioned on the loop index, whose effectiveness is empirically validated.
## update after rebuttal
The rebuttal helps complement the draft and addresses some of my concerns. I will maintain my original score, which leans positive overall.
Claims And Evidence: Most claims are theoretical and the correctness depends on the proof. Another claim on how adding timestep encoding should improve performance is empirically validated.
Methods And Evaluation Criteria: Yes the methods and evaluation make sense.
Theoretical Claims: I could understand the high level proof strategies which make the conclusions plausible, but didn't check the proof correctness due to limited time and expertise in theoretical analysis.
Experimental Designs Or Analyses: The experiments are on tasks repeatedly adopted in prior work, and the settings are sound.
Supplementary Material: I briefly checked the datasets and the tasks used for experiments. They help provide extra details.
Relation To Broader Scientific Literature: The paper mostly extends the current theoretical understanding of looped transformer on its parameter efficiency and approximation abilities, and how they compare with standard transformers.
Essential References Not Discussed: I believe deep equilibrium models are closely related to looped transformers, but the authors don't seem to discuss them.
Other Strengths And Weaknesses: While it's good to understand the expressive power of looped transformers, it's a bit unclear how/whether the ideal weight configurations could be learned through gradient-based optimization. Looped transformers are notoriously hard to train stably, and there are various tricks that people have been using, such as only computing the gradient at the last several recurrence steps. It would be great if the analysis could provide insights into how to better train these models.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: > I believe deep equilibrium models are closely related to looped transformers, but the authors don't seem to discuss them.
Thank you for the suggestion. We agree that Deep Equilibrium Models are related to Looped Transformers. We will add the following sentence to the related work section:
**line 77:** Deep equilibrium models (Bai et al., 2019), which compute fixed points of iterative layers, are also related.
> W1. While it's good to understand the expressive power of looped transformers, it's a bit unclear how/whether the ideal weight configurations could be learned through gradient-based optimization. Looped transformers are notorious hard to train stably, and there are various tricks that people have been using such as only computing the gradient at the last several recurrence steps, etc. It would be great if the analysis could provide insights into how to better train these models.
Thank you for the thoughtful comment. We conducted an additional analysis: specifically, we empirically observed that trained models tend to exhibit smaller continuity coefficients when timestep encodings are used. Since smaller continuity coefficients (such as Lipschitz constants) are often associated with more stable training dynamics, our results suggest that the proposed approach may also contribute to improved trainability of Looped Transformers. We will include this point as an important direction for future work:
**line 439:** Beyond expressivity, improving training stability and efficiency, especially for larger numbers of loop iterations, is an important direction for future work.
**Reference:**
Bai et al., "Deep Equilibrium Models", NeurIPS 2019.
---
Fragments to Facts: Partial-Information Fragment Inference from LLMs | Accept (poster)

Summary: The paper proposes a new privacy threat for fine-tuned LLMs called “Partial-Information Fragment Inference” (PIFI). The authors show that even if an attacker only knows a few scattered keywords about someone’s data (e.g., certain medical terms), they can still prompt the model to uncover additional, sensitive details. They develop two simple yet effective attacks (a likelihood-ratio approach and one called “PRISM”). Experiments in both medical and legal settings confirm that fine-tuned models can leak private fragment-level information, underscoring the need for more robust privacy defenses.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Experimental design is sound.
Supplementary Material: No, it is mainly code.
Relation To Broader Scientific Literature: Building on membership inference (Shokri et al.) and memorization (Carlini et al.), the paper generalizes data leakage to weaker assumptions, showing that even scattered fragments can reveal private details.
Essential References Not Discussed: As far as my knowledge no.
Other Strengths And Weaknesses: Strength:
* A realistic assumptions on adversary attacks with partial fragments.
* Acknowledging different ablations like more number of epochs, comparing models with different parameter sizes, etc.
Weakness:
* Classifier is considered as the baseline for experiments. Are there any other method in the literature that you can make the comparison with?
* You claim the TPR at low FPR isn’t random and is truly meaningful, matching a classifier’s performance. But how do we know the model isn’t just capturing domain-wide patterns (like “cold” usually co-occurring with “cough”) rather than genuine memorization at the individual level? Is there an experiment showing it’s not merely picking up generic associations from the entire dataset? That part still confuses me when considering a single datapoint’s data.
Other Comments Or Suggestions: Please look at above.
Questions For Authors: Question1: Classifier is considered as the baseline for experiments. Are there any other method in the literature that you can make the comparison with?
Question 2: You claim the TPR at low FPR isn’t random and is truly meaningful, matching a classifier’s performance. But how do we know the model isn’t just capturing domain-wide patterns (like “cold” usually co-occurring with “cough”) rather than genuine memorization at the individual level? Is there an experiment showing it’s not merely picking up generic associations from the entire dataset? That part still confuses me when considering a single datapoint’s data.
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: > **Classifier is considered as the baseline for experiments. Are there any other methods in the literature that you can make the comparison with? / The paper should compare its attack success rate against existing MIA or extraction attacks on the same dataset/model.**
This is an excellent question / comment. To our knowledge, no directly comparable baselines operate under the same weak adversarial assumption as PIFI. Traditional extraction and membership inference attacks require full, ordered samples and much more information (e.g., around 200 tokens), whereas PIFI only assumes access to a small set of unordered fragments (approximately 20). For this reason, we include a classifier -- leveraging ground truth labels -- as a strong, data-aware baseline for performance, against which we compare our data-blind methods (LR-Attack and PRISM). Although we reference likelihood ratio–based methods (e.g., Carlini et al. 2022a) for context, those approaches target a fundamentally different threat model. For instance, Carlini et al. report a 1.4% TPR at 0.1% FPR using 256 shadow models.
While their success rates are in a similar range, direct comparisons are challenging due to significantly differing assumptions. Our goal is to demonstrate that successful extraction is possible even under much weaker assumptions, rather than to claim that PIFI is inherently stronger. We agree that further contextualization alongside prior MIA work would be valuable and will expand on this discussion in the revised paper.
> **(Related question from Reviewer zpv2) Why do we observe that legal setting attacks are more challenging?**
We’d be happy to expand on the legal results in the revised paper, drawing from Appendix B.3 (T6). In the legal summarization task, target fragments often include more common language, including crimes or legal terms that are likely present in general pre-training data. This dilutes fragment-specific signals compared to the medical setting, where terms like “daunorubicin” are more domain-specific and comparatively rare. As a result, attack performance is lower in the legal domain, though still above chance. Interestingly, LR-Attack performs best here, likely due to its heightened sensitivity to rare fragments, which helps in a setting where most targets are otherwise common.
> **But how do we know the model isn’t just capturing domain-wide patterns (like “cold” usually co-occurring with “cough”) rather than genuine memorization at the individual level? / Clarify whether the proposed algorithm leverages the frequency of different conditions?**
These are good questions. Our experiments indicate that the attack isn’t simply leveraging generic domain-wide associations but is also capturing signals specific to individual samples. For example, PRISM’s performance nearly matches that of a data-aware classifier with ground-truth labels. If the model were only exploiting common co-occurrence patterns (e.g., “cold” with “cough”), then the target and shadow models would yield very similar probabilities, and the likelihood ratio would not offer the discriminative power we observe. Moreover, our memorization tests (see Appendix Section E, Table 6) further confirm that fine-tuning leads to the memorization of individual samples.
At the same time, our approach does take advantage of statistical co-occurrence: if an adversary knows that a record contains a fragment like “hypertension,” and the fine-tuned model was trained on data where “hypertension” frequently co-occurs with “osteoporosis,” the model assigns a higher probability to that associated fragment. LR-Attack captures this by comparing the target and shadow model probabilities, while PRISM refines the inference by incorporating a general “world” probability to discount associations that are common across the domain. This combination of mechanisms ensures that our attack can distinguish genuine individual-level memorization from mere generic co-occurrence patterns.
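The scoring mechanism just described can be sketched with toy numbers. Everything below is invented for illustration: the fragment probabilities are made up, and the mixture-style null in `prism_score` is only one plausible way to fold a "world" prior into the likelihood ratio, not the paper's exact formulation.

```python
import math

# Toy per-fragment probabilities; in a real attack these would come from
# querying the fine-tuned target model, a shadow model, and a general
# "world" model. "osteoporosis" is rare but memorized by the target;
# "cough" is a generic domain-wide co-occurrence.
p_target = {"osteoporosis": 0.020, "cough": 0.150}
p_shadow = {"osteoporosis": 0.004, "cough": 0.140}
p_world = {"osteoporosis": 0.003, "cough": 0.130}

def lr_score(frag):
    """LR-Attack style: how much more likely the fragment is under the
    target model than under the shadow model."""
    return math.log(p_target[frag] / p_shadow[frag])

def prism_score(frag, alpha=0.5):
    """PRISM style (sketch): test the target against a mixture of the
    shadow and world models, so fragments that are probable everywhere
    in the domain are discounted."""
    null = alpha * p_shadow[frag] + (1 - alpha) * p_world[frag]
    return math.log(p_target[frag] / null)

for frag in p_target:
    print(frag, round(lr_score(frag), 3), round(prism_score(frag), 3))
```

Under these toy numbers the rare, memorized fragment scores far above the generic one under both scores, which is the intended behavior: genuine individual-level signal stands out, while domain-wide associations contribute little.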
> **(Related question from Reviewer zpv2) In Sec. 4.2, how unique is the given fragment s?**
In the medical setting, we targeted 4,302 fragments, 1,034 of which were unique. Common fragments included "pain" (124), "fever" (69), "shortness of breath" (55), "diarrhea" (47), and "chest pain" (44). However, 75% of fragments appeared fewer than five times, and 47% were targeted only once, indicating that most were highly specific. For instance, single-targeted examples include "Vincristine," "colitis," "daunorubicin," "Naprosyn," "Xalatan," and "lumbar spinal stenosis." | Summary: This paper introduces a new threat model for extracting sensitive data from fine-tuned LLMs using only partial, unordered fragments. It proposes two data-blind attacks: a Likelihood Ratio Attack and PRISM, which refines inference using an external prior. Experiments in medical and legal domains show these attacks effectively extract private information, rivaling data-aware classifiers.
Claims And Evidence: The paper's claims are generally supported by experimental results and theoretical justifications, demonstrating that fine-tuned LLMs are vulnerable to fragment-based inference attacks. However, some aspects need further validation: (1) PRISM's effectiveness relies on strong assumptions about likelihood ratios; (2) attack robustness against different fragment types is underexplored; (3) generalization beyond medical and legal datasets is limited.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for assessing fragment-based privacy risks in fine-tuned LLMs, with relevant medical and legal datasets. However, broader evaluation across different fine-tuning strategies and domains would strengthen the analysis.
Theoretical Claims: The theoretical claims are mostly correct, with LR-Attack and PRISM grounded in established statistical methods. However, PRISM's reliance on prior-based adjustment assumes a strong correlation that may not always hold in real-world cases.
Experimental Designs Or Analyses: The experimental design is mostly sound, with evaluations on fine-tuned LLMs using medical and legal datasets. The attack success rates at low FPRs support the paper’s claims. However, the analysis lacks a deeper examination of different fragment types, fine-tuning strategies, and broader domain applicability. More ablations on attack robustness across diverse data distributions would improve validity.
Supplementary Material: I have reviewed all the material submitted by the authors.
Relation To Broader Scientific Literature: The paper extends prior work on membership inference and memorization attacks by introducing PIFI, which extracts sensitive data from fine-tuned LLMs using unordered, partial fragments. LR-Attack builds on likelihood ratio-based inference, while PRISM refines it with a prior to reduce false positives. This contributes to LLM privacy research, highlighting new risks in data leakage and adversarial inference.
Essential References Not Discussed: The paper discusses relevant prior work on **membership inference and memorization attacks**, but it could benefit from citing additional studies on **fine-tuning vulnerabilities, differential privacy, and fragment-based adversarial attacks**. Below are some references that strengthen the discussion:
1. **Carlini et al. (2022a)** – *Membership inference attacks from first principles* (IEEE S&P 2022)
2. **Nasr et al. (2023)** – *Scalable extraction of training data from (production) language models* (arXiv 2023)
3. **Shokri et al. (2016)** – *Membership inference attacks against machine learning models* (IEEE S&P 2017)
- One of the **earliest works on membership inference**, forming the basis for **later LLM privacy attacks** like **PIFI**.
Other Strengths And Weaknesses: #### **Strengths**
- The paper introduces a **novel threat model (PIFI)** that extends **membership inference** to cases where the adversary has only **unordered, partial fragments** of sensitive data, making it a significant contribution to **LLM privacy research**.
- The proposed **PRISM method** refines inference using an **external prior**, which improves **robustness against false positives** compared to standard likelihood ratio-based attacks.
- The evaluation in **medical and legal domains** highlights **real-world risks** of fine-tuned LLMs, making the findings **practically relevant**.
#### **Weaknesses**
- The paper lacks **a deeper analysis of why certain fragments are more extractable**, making the attack’s effectiveness less predictable across different datasets.
- **PRISM's assumptions about prior distributions** may not always hold in real-world data, potentially limiting its **generalizability**.
- The study does not provide a **detailed discussion on defenses**, such as differential privacy or fine-tuning strategies that could mitigate fragment-based extraction risks.
Other Comments Or Suggestions: The paper would benefit from **more experiments and a broader range of datasets** to further validate the effectiveness and generalizability of the proposed methods.
Questions For Authors: 1. **How do you differentiate between high-confidence ID data and OOD data?**
- Since both can yield high likelihood scores, what mechanisms ensure that the extracted fragments truly belong to the training data rather than general knowledge?
2. **How does fine-tuning strategy impact attack effectiveness?**
- The paper evaluates both full fine-tuning and LoRA, but how do other adaptation methods (e.g., RLHF, prompt-tuning) influence vulnerability to PIFI attacks?
3. **How does the model’s exposure frequency to training data affect attack performance?**
- The results suggest that more fine-tuning epochs increase the risk of fragment inference. Have you explored a threshold where this effect stabilizes?
4. **What are the practical implications for deployed LLMs?**
- Given that the attack applies to fine-tuned models, how should organizations balance fine-tuning benefits with the privacy risks posed by PIFI?
5. **Would adding noise to model outputs significantly reduce attack success?**
- The paper mentions that prompt noising has limited impact. Have you tested structured noise approaches, such as adversarial perturbations or differential privacy, as defenses?
6. **How robust is the attack against existing defense methods?**
- There are many established defense techniques against privacy attacks. How does PIFI perform against **differential privacy, knowledge distillation, adversarial training, or gradient masking**? Have you evaluated its robustness under strong defenses?
Ethics Expertise Needed: ['Privacy and Security']
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > **...assumptions about prior distributions may not always hold in real-world...**
We appreciate the comment. PRISM does make an assumption about the usefulness of a prior world model probability in adjusting the likelihood ratio (assumed to be correlated with sample membership), a premise grounded in statistical principles (see Section 4.2) and supported by prior work [e.g., Carlini et al. 2022a]. Additionally, we have empirically validated this assumption through controlled experiments with a fully specified trigram model (Appendix C) and large-scale LLMs in both medical and legal settings (Section 7, Appendix B). In practice, PRISM reduces false positives compared to LR-Attack, showing that even an approximate prior can provide meaningful signal. Moreover, PRISM performs competitively with, and sometimes better than, a data-aware classifier baseline, suggesting its potential to generalize beyond our setup.
> **Missing citations.**
We discuss all three of these key prior works in our paper already. Carlini et al. (2022a) is cited when introducing membership inference and extraction attacks and motivating our LR-Attack. Nasr et al. (2023) is referenced in our review of scalable extraction methods, supporting our focus on fragment-based extraction in fine-tuned models. Shokri et al. (2016) is acknowledged as foundational in membership inference and helps contextualize our extension of prior threat models. We’d be happy to highlight these works more prominently in the revision.
> **...deeper analysis of why certain fragments are more extractable...**
This is a great question. As noted in our response to Reviewer zpv2, for the medical setting, we targeted 1,034 unique fragments. 47% of those fragments occurred only once. Below, we provide results for the subset of fragments that occur only once, compared to the subset that occurred multiple times:
**Results on target fragments occurring only once:**
|Method|TPR@2%FPR|TPR@5%FPR|ROCAUC|
|--|--|--|--|
|Classifier|10.8%|13.7%|0.65|
|LR-Attack|17.5%|34.4%|0.77|
|PRISM|2.8%|4.2%|0.57|
**Results on target fragments occurring multiple times:**
|Method|TPR@2%FPR|TPR@5%FPR|ROCAUC|
|--|--|--|--|
|Classifier|7.1%|17.0%|0.74|
|LR-Attack|2.5%|5.3%|0.61|
|PRISM|5.6%|13.5%|0.73|
This result highlights how incorporating the world-model probability in PRISM significantly improves sensitivity to more common medical terms, whereas the LR-Attack is most sensitive to rare fragments. We’ll highlight this result in the revised version of our paper and add a more in-depth discussion.
> **How does PIFI perform against differential privacy, etc.?**
Thank you, this is an excellent question. **Please see our response to Reviewer m6fY, where we report results on our attack under differential privacy.**
> **How to differentiate between ID and OOD data?**
The LR-Attack could be susceptible to ID vs. OOD data, as you suggest. This motivated the PRISM approach, which uses the world model probability as a prior to adjust the likelihood ratio (target versus shadow), discounting fragments that are common in general knowledge. In principle (see Appendix C for the Synthetic data results, validating this heuristic), PRISM ensures that only fragments with an unusually high likelihood -- beyond what is expected from generic associations -- are flagged as memorized from the training data.
> **How does fine-tuning strategy impact effectiveness?**
Our paper explores this along two dimensions: number of fine-tuning epochs and fine-tuning strategy (full fine-tuning vs. LoRA). We find that more epochs increase TPR at fixed low FPRs, showing that repeated exposure heightens memorization and vulnerability (Section 7.2). While LoRA models are less vulnerable than fully fine-tuned ones, they still leak information, indicating that parameter-efficient methods reduce but don't eliminate privacy risks (Section 7.3). We’re happy to emphasize these findings more in the revision.
> **How does exposure affect attack performance?**
Our experiments show that increasing the number of fine-tuning epochs consistently increases the risk of fragment inference, as demonstrated in Section 7.2. We have observed that attack performance improves continuously with additional epochs. We will add a more in-depth analysis to the revised version of the paper, stepping across epochs at greater granularity.
> **Practical implications for deployed LLMs?**
Our work shows that fine-tuning increases memorization risks, even when only partial fragments are exposed. Organizations must weigh the performance gains of fine-tuning against the risk of fragment leakage. Mitigations include using privacy-preserving techniques (e.g., differential privacy), restricting model access, and monitoring for leaks. Ultimately, domain experts should guide these privacy-performance trade-offs through informed risk assessments.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. While I appreciate the authors’ empirical efforts and clarifications regarding PRISM and its underlying assumptions, I still find the evaluation of defense methods lacking in depth, particularly in relation to privacy-preserving mechanisms such as differential privacy.
Although the authors mention differential privacy briefly in the rebuttal (referring to Reviewer m6fY), there is insufficient analysis on how PRISM performs under strong privacy guarantees or how its assumptions hold when differential privacy is actively applied. Since privacy leakage is central to the paper's motivation, I believe that a more thorough and systematic evaluation in this regard is essential.
As such, due to the limited treatment of this important aspect, I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback. We understand and appreciate your concern regarding our evaluation of defense methods -- specifically, the interplay between our proposed PIFI framework and differential privacy (DP) guarantees. **Due to space constraints in our rebuttal, we were unable to include the full range of our DP experiments (which tested ε values of 0.3, 1, 3, 9, and 27 on the Llama 3.2 3B model).** We will include these extensive results in the revised paper, as noted previously in our rebuttal.
Furthermore, though we agree that understanding the impact of DP on our attacks is important for settings where developers are mindful of the possibility of such attacks, **we also note that fine-tuning LLMs using DP frameworks is far from the norm in practice, in part because of the significant technical burden of training recent high-performing open LLMs with large batch sizes on a single GPU with a large amount of VRAM** (80GB is often needed for fine-tuning with sufficient batch sizes for 8B-parameter models, even after using techniques like qLoRA to reduce memory requirements). This remains a central issue for DP-fine-tuning, as it necessitates large capital expenditures that many organizations are not willing to commit. It is even a problem that we, as researchers who study these threat models and defenses, have had to contend with in extending our experiments to include models trained under DP.
Thus, while important, adequate resources for effective DP fine-tuning remain difficult to access for many developers and organizations, leaving many fine-tuned LLMs vulnerable to a threat model such as PIFI. This is to say nothing about the sometimes significant utility trade-offs that scare away many practitioners who fear that using DP will harm their outputs. For an excellent survey that touches on many of these challenges, see Miranda et al. 2024 (https://arxiv.org/pdf/2408.05212). **We believe this further motivates presenting PIFI, as both a real attack and also a cautionary tale to increase the uptake of DP when fine-tuning LLMs.**
Finally, **we’ll note that many prior memorization/extraction attack papers have not incorporated differentially private training or fine-tuning at all in their experiments** -- for example, Staab et al. (2024 ICLR, https://openreview.net/pdf?id=kmn0BhQk7p), Schwinn et al. (2024 Neurips, https://arxiv.org/abs/2402.09063), and Yu et al. (2023 ICML, https://proceedings.mlr.press/v202/yu23c/yu23c.pdf). Additionally, in works that *do* include results under differentially private training, such as Fu et al. (2024 Neurips, https://openreview.net/pdf?id=PAWQvrForJ) and Lukas et al. (2023 S&P, https://arxiv.org/pdf/2302.00539), they include only a brief section on differential privacy with representative results.
We adopt a similar perspective to the authors of these papers: **even if DP fine-tuning does reduce the vulnerability of LLMs to our attacks (which it does, and we agree that this is important to understand), that does not diminish the practical importance of the PIFI framework and attacks like PRISM for the vast majority of fine-tuned LLMs,** which do not fine-tune under DP, most commonly due to cost considerations and the lack of awareness of the potential benefits of doing so / fear about the utility cost of introducing noise during fine-tuning. Thus, our revised paper will present in-depth DP results for the Llama 3.2 3B model under PIFI, while deferring a more comprehensive comparison for future work, given the broad scope of the paper already (introducing a new threat model + proposing methods + in-depth evaluations and ablations).
Even though it appears unlikely that the DP results we have added during the rebuttal will change your mind, **we appreciate your perspective and hope that the above discussion coupled with our inclusion of the additional DP experiments in the revised paper will sufficiently address your concerns.** Thank you again for engaging with us during the rebuttal phase. | Summary: The increasing development of large language models (LLMs) has resulted in different explorations of their trustworthiness properties. Amongst them, privacy is one of the key concerns, where prior research has shown that LLMs are prone to leaking sensitive training data through memorization and membership inference attacks. One of the main bottlenecks with existing works is that adversarial attackers assume complete access to the training samples or some ordered prefixes. In this work, the authors explore a novel direction for testing the vulnerability of LLMs when adversarial attackers have access to only partial and unordered sample information. In particular, the authors propose LR-Attack and PRISM threat models and show that fine-tuned LLMs are susceptible to fragment-specific extraction attacks. Using small datasets from medical and legal domains, the authors show the effectiveness of the proposed attacks. While the questions raised by the authors are interesting, the paper lacks rigorous evaluation and misses key insights (please refer to the sections below for more details). Overall, the paper reads well and the authors have explored an interesting direction!
## update after rebuttal
Thank you for your detailed rebuttal response and for addressing my concerns. I will stick with my weak accept rating, as it remains unclear why smaller models would overfit such a dataset and why the proposed LR-Attack and PRISM would outperform the classifier baseline.
Claims And Evidence: The claims are supported by experimental results but they are not conclusive and haven't been thoroughly evaluated. Please see below for some open questions and refer to the "Methods and Evaluation Criteria" for more details.
i) In Sec. 4.2, how unique is the given fragment $s$?
ii) The paper states that the classifier baseline should always achieve the best score, as it uses ground-truth labels; still, in Sec. 7, we observe that LR-Attack and PRISM outperform the classifier baseline. It would be great if the authors could explain these results further.
iii) How transferable are these attacks to other open- or closed-source LLMs?
iv) Algorithm 1 describes the use of the decision threshold. Is this threshold different for different samples in a given dataset?
v) How would these attacks perform when tested on safety-tuned models or models fine-tuned with privacy constraints?
vi) Intuitively, it seems that the proposed algorithm is leveraging the frequency of different conditions (in the medical dataset) that occur together, e.g., hypertension and osteoporosis frequently occur together in medical notes. Can the authors clarify this?
Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense within the context of understanding whether LLMs are vulnerable to adversaries who only have partial and unordered sample information. However, the evaluation of the proposed methodology is not very strong. For instance,
i) the authors test their proposed attacks using two small datasets (medical summarization and legal setting), which contain 312 and 235 test samples, respectively.
ii) the authors mostly use small language models in the range of 3-8B parameters, which raises the question of the utility of the proposed attack on large language models (like Llama 13b and other models in higher parameter range).
iii) The main results compare LR-Attack and PRISM with a random-guess baseline, where the proposed methods obtain an AUC of ~0.65, on average. Are these results conclusive to showcase the effectiveness of the proposed algorithms?
iv) There are no results to show the robustness of the method against existing defense techniques.
v) The results for the legal settings are interesting, but the authors do not discuss them in detail, e.g., why do we observe that legal setting attacks are more challenging? Similar arguments for other results (like in 7.7), where the authors do not provide any explanation of the obtained results.
vi) The authors propose to use LightGBM for the classifier baseline but do not provide the accuracies obtained by LightGBM on the classification task. Since the experiment section doesn't comprise any existing baselines, it would be beneficial to have a range of highly predictive classifier baselines to show the effectiveness of LR-Attack and PRISM.
vii) In Figures 4-5, why does LR-Attack obtain an AUC lower than random guess?
viii) In Appendix F, the authors mention that they take the mean of the probabilities output by Llama-3.1-8B-Instruct, Mistral-7B-v0.2, and Gemma-2B-IT LLMs. Why not use an ensemble of large open-sourced models as the world model?
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes, I read the experimental setup and thoroughly reviewed the analysis.
Supplementary Material: I reviewed the entire supplementary material.
Relation To Broader Scientific Literature: The paper has a key contribution to the broader scientific literature of memorization and membership inference by proposing threat models that can break LLMs using partial and unordered information.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: The paper presents an interesting perspective in attacking large language models using partial-information fragments.
Other Comments Or Suggestions: NA
Questions For Authors: Please refer to the "Claims And Evidence" and "Methods And Evaluation Criteria" for more details.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **In Sec. 4.2, how unique is the given fragment s?**
**Please see our response to Reviewer rGXA, where we give exact details on uniqueness.**
> **Why does LR-Attack and PRISM sometimes outperform classifier baseline?**
This is a great question and we’ll include this discussion in the revised paper — while the classifier baseline benefits from ground-truth labels and can, in theory, exploit signals more effectively, its performance may degrade due to overfitting, especially when fragment distributions are highly variable. In contrast, LR-Attack and PRISM use data-blind decision rules based on probability ratios (and a Bayesian update in PRISM), without learning from labeled data. This makes them more robust to overfitting. As a result, in settings with high distributional variation, these rigid decision rules can sometimes outperform the classifier.
> **Attacks transferable to other open- or closed-source LLMs?**
We evaluated our attack on a range of open models and found it consistently effective across architectures and parameter scales (see Appendix B1, Figure 7). The transferability to closed models, however, is currently limited by the need for echo functionality on logprobs — previously available in some APIs (e.g., OpenAI’s) but now removed due to security concerns. Future work could explore alternatives, such as methods proposed by Finlayson et al. (2024) to obtain logprobs from closed models.
> **Is this threshold in algorithm 1 different for different samples in a given dataset?**
Good question; we’ll clarify this in the revised paper. No, the threshold is fixed for each strategy based on the desired sensitivity, similar to prior membership inference attacks [Carlini et al. 2022a; Duan et al. 2024]. For instance, on a converged Llama 8B model, the PRISM threshold is 0.081, and the LR-Attack threshold is 1.78 when targeting a 2% FPR.
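As an illustrative sketch of this calibration (a generic procedure with our own toy variable names, not our exact implementation), a fixed threshold can be chosen as the quantile of held-out non-member attack scores at the target FPR:

```python
import numpy as np

def calibrate_threshold(nonmember_scores, target_fpr=0.02):
    """Choose the score threshold whose false-positive rate on
    held-out non-member fragments is approximately target_fpr."""
    # Scores above the (1 - target_fpr)-quantile of the non-member
    # score distribution get flagged, yielding roughly the desired FPR.
    return float(np.quantile(nonmember_scores, 1.0 - target_fpr))

rng = np.random.default_rng(0)
nonmember = rng.normal(0.0, 1.0, 10_000)  # toy non-member attack scores
tau = calibrate_threshold(nonmember, target_fpr=0.02)
fpr = (nonmember > tau).mean()  # empirical FPR at the chosen threshold
```

The same threshold is then applied unchanged to every candidate fragment, matching the fixed-threshold decision rule in Algorithm 1.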
> **Test on models fine-tuned with privacy constraints?**
Thank you for the excellent suggestion. **Please see our response to Reviewer m6fY, where we evaluate our attack under a differential privacy.**
> **Clarify whether the proposed algorithm leverages the frequency of different conditions?**
Thank you for this observation. **Please see our response to Reviewer rGXA, where we discuss frequency / co-occurrence**
> **…test using small datasets…**
While the test datasets are relatively small (312 samples for medical and 235 for legal), our attack evaluated over 4,302 target fragments, providing strong evidence for the feasibility of the PIFI threat model. We plan to expand our evaluation to larger datasets in future work to further validate these results.
> **…what is utility of the proposed attack on larger LLMs?**
Thank you for the suggestion; we have evaluated our attack on a 70B-parameter Llama-3.3 model fine-tuned for 10 epochs, which is the largest model we could finetune given our compute constraints. The results below show that the attack remains effective even at this larger scale:
|Method|TPR@2%FPR|TPR@5%FPR|ROCAUC|
|-|-|-|-|
|Classifier|4.9%|11.4%|0.67|
|LR-Attack|5.2%|10.2%|0.64|
|PRISM|4.2%|11.3%|0.65|
We will include this result in our revisions.
> **Why do we observe that legal setting attacks are more challenging?**
**Please see our response to Reviewer rGXA, where we discuss the legal setting in detail.**
> **…do not provide LightGBM accuracies on the classification task + range of performant classifiers...**
We'd be happy to report accuracies for the LightGBM classifier baseline and other classifier baselines (e.g., XGBoost, FT-Transformer). For example, the Llama-3 8B model attack had an accuracy of 0.77 and an F1 score of 0.25; however, we maintain our focus on ROC AUC and TPR (which we do report for the classifier baseline) as they reflect the utility of our LR-Attack and PRISM methods in low-FPR regimes (consistent with [Carlini et al. 2022a]).
> **…Figures 4-5, LR-Attack obtains an AUC lower than random guess?**
This typically means the scoring function is inversely correlated with true membership; it happens when assumptions about target vs. shadow model behavior break down, such as when both assign similar probabilities or the fragment is very common, skewing the ratio. This is one motivation for PRISM.
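A toy demonstration of this inversion (illustrative numbers, not our experimental data): when the score is systematically lower for members, the ROC AUC drops below 0.5, and negating the score recovers 1 - AUC:

```python
import numpy as np

def auc(labels, scores):
    """ROC AUC: probability that a random positive outscores a
    random negative (ties counted as half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

rng = np.random.default_rng(1)
labels = np.repeat([1, 0], 500)
# Score is accidentally *lower* for members: an anti-correlated signal.
scores = np.where(labels == 1,
                  rng.normal(-0.5, 1.0, 1000),
                  rng.normal(0.5, 1.0, 1000))
a = auc(labels, scores)          # below 0.5 for an inverted score
a_flipped = auc(labels, -scores)  # equals 1 - a
```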
> **…ensemble of large open-sourced models as the world model?**
Great suggestion. We have now experimented with using DeepSeek-v3/r1 world probabilities (averaged), and found that this higher-quality model did improve performance. These results suggest that adding more high-quality open-source models to the world model ensemble could further enhance our approach.
Below is the performance of the Llama 3 8B model with DeepSeek world probabilities (10 epochs):
|Method|TPR@2%FPR|TPR@5%FPR|ROCAUC|
|--|--|--|--|
|Classifier|7.0%|13.4%|0.69|
|LR-Attack|5.3%|10.6%|0.64|
|PRISM|5.2%|11.6%|0.70|
Claims And Evidence: The claims in the submission are largely supported by empirical evidence, but some areas may require further clarification or stronger justification. Here’s an analysis of key claims:
Claim: Adversaries can extract sensitive information from LLMs using unordered fragments.
Potential issue: The paper presents two attack methods (LR-Attack and PRISM) and evaluates them on fine-tuned LLMs in a medical setting. The results show non-trivial success rates, supporting the claim that fragment-based inference is feasible. The generalizability of this finding beyond the tested fine-tuned models is unclear. The study should discuss whether the attacks work on non-fine-tuned, general-purpose LLMs.
Claim: PIFI is a more realistic threat model than prior work on memorization and membership inference.
The motivation for PIFI is reasonable—real-world adversaries may only have access to partial data.
Potential issue:
The paper should compare its attack success rate against existing membership inference or extraction attacks on the same dataset/model to substantiate its claim of PIFI being a stronger or more practical threat.
Claim: The proposed attacks (LR-Attack and PRISM) are effective.
Methods And Evaluation Criteria: Measuring fragment reconstruction accuracy and attack success rate is logical.
Potential Issues: It is unclear whether the paper accounts for trivial guesses—does the attack outperform a simple "most likely completion" heuristic?
The claim that existing defenses may not fully prevent PIFI attacks is interesting and relevant.
The paper should explicitly evaluate common privacy-preserving techniques. If tested defenses are ineffective, discussing why they fail would add depth to the analysis.
The PRISM attack is not completely clear to me. A more intuitive explanation would help.
Theoretical Claims: NA
Experimental Designs Or Analyses: The experimental design is reasonable. The proposed attack methods are well-motivated and align with plausible LLM privacy vulnerabilities.
Supplementary Material: Looks fine to me overall. I did not read the scripts and code.
Relation To Broader Scientific Literature: The key contributions of the paper build upon prior work in LLM privacy risks, membership inference, and data extraction attacks. Specifically, it extends findings in the following areas:
- Data Memorization in LLMs and Membership inference
- Attacks based on statistical inference
- Prompt-Based Extraction Attacks:
Essential References Not Discussed: NA
Other Strengths And Weaknesses: See above sections
Other Comments Or Suggestions: Better clarity in writing will help specially in technical sections
Questions For Authors: See previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > **…discuss whether the attacks work on general-purpose LLMs**
Thank you for raising this important point. Our work specifically focuses on sensitive domains (e.g., hospitals or legal) where models are adapted on private data, and fine-tuning is often necessary because organizations cannot or do not wish to train a full-scale foundational model on sensitive data. Consequently, the vulnerabilities we expose are most pertinent to these fine-tuned models. Additionally, prior work (e.g., [Duan et al. 2024]) shows that membership inference attacks on foundation models are significantly more challenging, as such models are trained on broad datasets and memorize less. Moreover, our method depends on building a shadow model, a step that is often impractical for large, non-fine-tuned models due to their scale and complexity. We also note, though, that our fine-tuned models remain performant on general-purpose benchmarks like MMLU, suggesting they can be used in a manner similar to other general-purpose models, as discussed further in our response to reviewer wf5m.
> **…compare its attack success rate against existing MIA or extraction attacks...**
Thank you for the comment; some other reviewers brought up similar points. **Please see our response to Reviewer rGXA, where we discuss MIA performance relative to our PIFI threat model.**
> **…unclear whether the paper accounts for trivial guesses… does attack outperform a "most likely completion" heuristic?**
Thank you for the suggestion. To clarify, by “most likely completion” do you mean a baseline that considers a greedy or Monte Carlo generation of the next token? We’d be happy to include another baseline; please clarify and we will add it in the revised paper.
> **…explicitly evaluate common privacy-preserving techniques.**
This is a great suggestion. **We have added experiments evaluating differentially private (DP) finetuning on the Llama 3.2 3B Instruct model using sample-level DP (via the dp-transformers package with ε = [0.3, 1, 3, 9, 27] over 10 epochs).** We include only the ε = 3 results here due to space constraints, but will add full results in the revised paper. This table shows task performance for the base model, the DP fine-tuned (FT) model, and the non-private FT model:
|Metric|BaseModel|DPFTModel($\epsilon=3.0$)|Non-Priv.FTModel|
|--|--:|--:|--:|
|ROUGE-Lsum|0.0963|0.0969|0.1004|
|BERTScoreF1|0.7140|0.7186|0.7299|
DP FT slightly improves performance over the base model but does not reach the performance of non-private FT, which aligns with expectations [Lukas et al. 2023].
We further evaluated the PIFI threat model and attacks on the DP FT LLM:
|Method|TPR@2%FPR|TPR@5%FPR|ROCAUC|
|--|--:|--:|--:|
|Classifier|4.4%|10.0%|0.64|
|LR-Attack|0.9%|2.4%|0.51|
|PRISM|4.0%|9.6%|0.54|
Here, DP fine-tuning reduces the success of the LR attack (0.9% TPR at 2% FPR), indicating the DP mechanism protects against this attack. However, both the classifier baseline and PRISM still achieve roughly twice the TPR at fixed low FPRs. We consider a potential explanation:
Sample-level DP guarantees for LLM finetuning ensure $\sum_{i=1}^T \ell_i(x_{1:i}) \leq \epsilon$ where $\ell_i(x_{1:i}) = \log \frac{\Pr(x_i \mid x_{1:i-1}, D)}{\Pr(x_i \mid x_{1:i-1}, D')}$ (Yu et al., 2021). However, note that this privacy loss can be unevenly distributed across tokens: for example, consider a two-token case with $\frac{\Pr(x_1 \mid D)}{\Pr(x_1 \mid D')} = e^\epsilon$ and $\frac{\Pr(x_2 \mid x_1, D)}{\Pr(x_2 \mid x_1, D')} = 1$; the total privacy loss $L(x_1, x_2) = \epsilon + 0 = \epsilon$, satisfying the guarantee, but the entire privacy budget is concentrated on a single token! In summary, with standard DP finetuning **individual tokens can incur nearly $\epsilon$ loss if others compensate.** Further analysis + other DP approaches would be promising for future work.
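The two-token case can be restated numerically (a toy check of the argument above, using the ε = 3 budget from the table for concreteness):

```python
import math

eps = 3.0  # total sample-level budget (matching the epsilon = 3 run)

# Per-token probability ratios Pr(x_i | ..., D) / Pr(x_i | ..., D')
# from the two-token example: all of the divergence sits on token 1.
ratios = [math.exp(eps), 1.0]
per_token_loss = [math.log(r) for r in ratios]

total_loss = sum(per_token_loss)   # equals eps: guarantee satisfied
worst_token = max(per_token_loss)  # also eps: budget on a single token
```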
> **…PRISM attack is not completely clear to me…more intuitive explanation would help**
Thank you, we’ll include this discussion in the revised paper if helpful. PRISM builds on the standard LR attack by asking: how surprising is it that the target fragment appears, given what we expect in general? LR attack compares the target model (which has seen the sample) and shadow model (which hasn’t) probabilities to detect memorization of a fragment. However, this ratio can be high simply because a fragment is common in a domain. PRISM corrects by incorporating a “world model” that estimates the general likelihood of a fragment. In essence, PRISM performs a Bayesian update — it adjusts LR score with a prior that reflects how likely the fragment is in any sample. If "smoker" frequently co-occurs with "osteoarthritis," a high LR score might not indicate memorization; by using the world model probability, PRISM tempers the score -- if "smoker" is common overall, the adjusted score is lower, reducing FPs. We validate this with a small, controlled synthetic setup (Appendix C), where PRISM outperformed the basic LR attack. | Summary: This paper introduces a novel Partial-Information Fragment Inference (PIFI) threat model that examines the potential for sensitive data extraction from LLMs using only unordered, publicly available fragments of information. Two data-blind attack methods are proposed: LR-Attack (likelihood ratio-based) and PRISM (posterior-refined), which aim to infer whether a private fragment was part of an individual’s data used in model fine-tuning. Experiments in medical and legal domains show that even limited adversarial knowledge enables meaningful privacy breaches, highlighting new privacy vulnerabilities in LLMs.
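As a toy sketch of this intuition (the functional form of the adjustment below is illustrative only; the exact PRISM update is given in Section 4.2 of the paper):

```python
def lr_score(p_target, p_shadow):
    """Likelihood-ratio score: how much more probable the fine-tuned
    (target) model finds a fragment than a shadow model that never
    saw the sample."""
    return p_target / p_shadow

def prism_score(p_target, p_shadow, p_world):
    """Illustrative PRISM-style adjustment (hypothetical form): temper
    the LR score with a world-model prior so that fragments common in
    general text score lower."""
    return lr_score(p_target, p_shadow) * (1.0 - p_world)

# "smoker" is common in medical text generally: high world probability.
common = prism_score(p_target=0.08, p_shadow=0.02, p_world=0.6)
# A rare fragment with the same raw LR keeps most of its score.
rare = prism_score(p_target=0.08, p_shadow=0.02, p_world=0.01)
```

Both fragments share the same raw LR score of 4, but the common fragment's adjusted score is far lower, which is how PRISM suppresses false positives on generic domain terms.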
Claims And Evidence: The key claim is that LLMs fine-tuned on sensitive data are vulnerable to fragment-level privacy attacks under weak assumptions. This is supported by empirical results showing non-trivial TPR at low FPR in both medical and legal domains using the proposed methods. However, the results, while statistically significant, are modest in absolute terms (e.g., ~10% TPR @ 2–5% FPR), raising questions about practical impact. Furthermore, while the authors argue that such attacks are plausible in real-world settings, the practicality and prevalence of the assumed attacker capabilities remain somewhat speculative.
Methods And Evaluation Criteria: The paper uses realistic datasets (e.g., MTS-Dialog for medical records), a clear delineation of attack models, and fair baselines (e.g., Classifier as a data-aware oracle). Evaluation metrics such as TPR@FPR and AUC are appropriate, and the inclusion of shadow and world models demonstrates a careful design. The novel aspect lies in relaxing the assumption of access to complete training samples, aligning PIFI more closely with real-world scenarios where attackers may only have limited context.
Theoretical Claims: No formal proofs are presented.
Experimental Designs Or Analyses: The experiments are well-structured and thorough, including multiple models (e.g., LLaMA, Qwen, Mistral), LoRA variants, and varying fine-tuning depths. The ROC curve analysis is detailed, and sensitivity studies (e.g., fragment noise, model scale) are insightful. However, the practical severity of the attacks remains underexplored: for example, it is unclear how often such fragment sets would occur in the wild, or how adversaries might obtain them reliably.
Supplementary Material: Appendices offer useful details on dataset processing, model setup, and additional domain-specific experiments. The inclusion of both LoRA and full fine-tuning provides robustness insights.
Relation To Broader Scientific Literature: The paper is well-situated in the ML privacy literature, extending work on memorization and membership inference.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
- Thorough experiments across domains and model types.
- Proposed methods are data-blind, making the attack scenario plausible.
Weaknesses:
- Practicality of the threat model is questionable in real-world adversarial scenarios.
- Utility evaluation of fine-tuned models is missing. Do LLMs still generalize after fine-tuning, or are they just memorizing?
- Attacks yield modest absolute TPR values, which may limit their practical threat significance.
Other Comments Or Suggestions: - Consider adding utility evaluations (e.g., GLUE or MMLU) of LLMs before and after fine-tuning to check whether models retain their general abilities.
- Clarify the practical steps by which an attacker might construct fragment sets S in open domains.
- Add qualitative examples of successful and failed attacks to aid interpretability.
Questions For Authors: - Have you evaluated model generalization on standard NLP benchmarks pre- and post-finetuning to evaluate levels of overfitting? E.g., GLUE or MMLU.
- How plausible is it for an attacker to reliably obtain the fragment sets you assume (e.g., 10–30 specific keywords)?
- Could the attack work without knowing the exact fragments from the sample, i.e., based on approximate knowledge?
- How sensitive are the results to the Prompt(S) template used?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > **Concerns about practical impacts, given the “modest” results.**
We appreciate the concern about practical impact. **While a 10% TPR at 2-5% FPR may seem modest, it can still pose a significant privacy threat (e.g., it equates to an attacker being correct 4 out of 5 times), especially scaled to thousands of individuals.** Moreover, membership inference attacks also face challenges in achieving high TPRs, yet are considered to be a significant privacy concern (see e.g., Duan et al. 2024)
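For concreteness, the "4 out of 5" figure corresponds to the attack's positive predictive value under a balanced membership prior (the 50/50 prior in the default below is our assumption for the arithmetic, not a claim from the paper):

```python
def attack_precision(tpr, fpr, prevalence=0.5):
    # P(member | attack flags member) via Bayes' rule; prevalence is the
    # assumed fraction of true members in the attacked population.
    tp = tpr * prevalence
    fp = fpr * (1 - prevalence)
    return tp / (tp + fp)

print(attack_precision(0.10, 0.025))  # ~0.8, i.e., correct about 4 times in 5
```

Note the figure is prior-sensitive: at lower membership prevalence the same TPR/FPR pair yields a lower precision.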
> **The practicality and prevalence of the assumed attacker capabilities remains speculative. How plausible is it for an attacker to reliably obtain the fragment sets you assume (e.g., 10–30 specific keywords)?**
We agree that the threat model’s practical impact depends on how feasible it is for an attacker to gather necessary fragments. In domains like healthcare, such fragment data is increasingly accessible. **A 2020 study [Seh et al., 2020] found that healthcare data breaches between 2005 and 2019 affected over 249 million individuals.** Additionally, self-health data collection (e.g., Lupton, 2016) and sharing on social media make such fragments more easily available. Given something as simple as a public record or leaked insurance info, combined with a reliable set of fragments derived from otherwise benign medical data shared on social media, an attacker could construct an effective set S of tokens to enable targeted attacks, as demonstrated by our PIFI threat model. We will elaborate on this in the revised paper.
> **Generalization results on standard NLP benchmarks (MMLU) pre- and post-finetuning**
Thank you for the suggestion. **We ran preliminary utility evaluations (MMLU). The Llama-3-8B model fine-tuned for 10 epochs reached an average MMLU score of 0.565, compared to 0.487 after one epoch and 0.413 for the baseline model.** In medical areas (e.g., clinical knowledge, medical genetics), fine-tuning improved performance over a non-fine-tuned baseline, suggesting the model remains capable and general. Note that further fine-tuned models can outperform generalist models on some benchmarks due to “better behavior” (they naturally follow a stricter format), which DeepEval docs specifically mention as a limitation of MMLU.
| MMLU Category | No Fine-Tune (Zero-Shot) | 1-Epoch Fine-Tune (Zero-Shot) | 10-Epoch Fine-Tune (Zero-Shot) |
|--|--:|--:|--:|
| Clinical Knowledge | 0.41 | 0.57 | 0.63 |
| Professional Medicine | 0.57 | 0.64 | 0.69 |
| Abstract Algebra | 0.27 | 0.15 | 0.25 |
|…|…|…|…|
We will add exhaustive MMLU results to our revised paper.
> **Add qualitative examples of successful and failed attacks to aid interpretability.**
Here are several examples of successful and failed attacks under PIFI from our data. We will update the paper with several of these in accordance with your recommendation.
1. An attack with the fragments “anxiety disorder, arthritis, Morton's neuromas, migraines” infers that an individual has hypothyroidism.
2. An attack with the fragments “shortness of breath, Coumadin, lightheadedness, chest pain, Cardizem, pain, vertigo” infers that an individual has atrial fibrillation.
3. An attack with the fragments “toxicity, breast cancer, Ixempra, tumor, neuropathy, Avastin, cancer, Faslodex, Zometa, ixabepilone” infers that an individual takes Aromasin.
Failed Inferences:
1. An attack with the fragments “cramping, trauma, pain, shock, numbness” fails to infer that an individual takes Naprosyn.
2. An attack with the fragments “diabetes, redness, swelling, itchiness, pain, skin infection” fails to infer that an individual also has cellulitis.
False Positives:
1. An attack with the fragments “myelofibrosis, swelling, diarrhea, hydroxyurea, J A K, steroids, polycythemia vera, lenalidomide, smoking, smoke, pain, numbness” incorrectly infers that an individual takes Prednisone.
2. An attack with the fragments “lung disease, shortness of breath, pneumonia, oxygen” incorrectly infers that an individual has COPD.
> **Could the attack work without knowing the exact fragments from the sample, i.e., based on approximate knowledge?**
Yes, the attack remains effective when the attacker’s knowledge is approximate. **In our ablation studies (see Appendix, Figure 9), we show that replacing 25-75% of true fragments with random unrelated ones leads to only modest performance drops.** This indicates the method is robust even when the adversary’s information is imperfect.
> **How sensitive are the results to the prompt(s) template used?**
We experimented with various prompt templates (e.g., “Consider a patient’s medical summary includes…”; “Suppose a patient’s medical summary includes conditions such”; etc.) and found that attack performance did not change substantially. While format influences raw output probabilities slightly, the overall ranking of candidate fragments (and hence the attack’s effectiveness) remains consistent. We will add a sensitivity analysis in the revised paper. | null | null | null | null |
GLGENN: A Novel Parameter-Light Equivariant Neural Networks Architecture Based on Clifford Geometric Algebras | Accept (poster) | Summary: The paper introduces Generalized Lipschitz Group Equivariant Neural Networks (GLGENN), a neural network architecture based on Clifford geometric algebras (GAs) that is equivariant to pseudo-orthogonal transformations of a vector space with a symmetric bilinear form. GLGENN uses a weight-sharing approach for layers calculating geometric products, resulting in a parameter-light architecture. The paper presents theoretical results on generalized Lipschitz groups, constructs the GLGENN architecture, and evaluates it empirically on toy tasks (regression of a function depending on two vectors and convex hull estimation of 16 points).
## update after rebuttal
Please see comments below.
Claims And Evidence: The paper claims three main contributions:
1. Introduction of Generalized Lipschitz Groups
2. Design and Implementation of GLGENN
3. Superior Performance of GLGENN compared to other equivariant architectures
With regards to claim 1, the paper describes the theory of generalized Lipschitz groups in great detail and the concept seems sound to me. However, I cannot say whether this is truly novel: I believe it is possible (and even somewhat likely) that this is an already known result in mathematics that is just not widely known in the ML community, but I cannot say for sure (I don't know the mathematical literature well enough). I recommend reaching out to mathematicians that specialise in group/ring-theory to check this.
For claim 2, the design and implementation of GLGENN is well-described, and the authors also open-source their code. With regards to the claimed novelty of the work, it is unclear to me in what respects GLGENN goes beyond the already published CGENN (Ruhe et al., 2023). Even the parameter sharing technique seems to have been proposed previously. The differences with respect to published work, in particular, which improvements are made, need to be made more clear.
Finally, the claimed superior performance of GLGENN despite having significantly fewer trainable parameters seems not sufficiently well-supported by the results to me. While GLGENN outperforms MLPs, EMLP-O(5), and EMLP-SO(5) on the regression task, CGENN achieves a noticeably lower error (especially for larger training set sizes). For the convex hull estimation, performance of CGENN and GLGENN seem more similar, but the shown learning curve suggests that CGENN may start outperforming GLGENN for larger training set sizes. I think extended experiments on other (non-toy!) problems are required to support this claim.
Methods And Evaluation Criteria: The proposed methods (GLGENN layers) are well-motivated theoretically and make sense for the investigated problems. They are derived directly from the properties of geometric algebras and the generalized Lipschitz groups. The use of conjugation operations, adapted linear layers, geometric product layers, and normalization layers are all justified theoretically in terms of equivariance.
However, the toy problems investigated in this work are too simplistic to allow a meaningful assessment of the proposed architecture. I recommend that the authors also apply their architecture to real world problems where the use of equivariant models is common (e.g., regression of molecular properties such as energy or forces). This is necessary to demonstrate superiority over existing models, simple toy tasks are not sufficient.
Theoretical Claims: I did not notice any obvious mistakes in the proofs or theoretical claims while reading the paper, but I did not check carefully.
Experimental Designs Or Analyses: The experimental design of the toy problems investigated in this work seems sound to me. However, I would recommend adding additional experiments for real world datasets (see above), and possibly also extending the investigated toy problems (e.g., significantly more points for the convex hull estimation, larger training set sizes, etc.).
Supplementary Material: I only reviewed the Supplementary Material superficially.
Relation To Broader Scientific Literature: The paper clearly positions itself within the existing literature on equivariant neural networks. It primarily builds upon prior work on Clifford Group Equivariant Neural Networks (CGENN) (Ruhe et al., 2023). The paper also cites foundational work on equivariant networks (Cohen & Welling, 2016a; Weiler & Cesa, 2019; Thomas et al., 2018).
As I mentioned before, it is less clear to me in which aspects the work goes beyond existing work, and I think the authors should try to state this more clearly.
Essential References Not Discussed: The paper appears to cite all essential references. I'm not aware of any missing crucial publications. However, as mentioned above, I think it is possible/somewhat likely that the generalized Lipschitz groups are actually already known in the mathematical literature, and I recommend reaching out to mathematicians that specialise in group/ring-theory to check this.
Other Strengths And Weaknesses: **Strengths**
+ The paper is mathematically rigorous, with detailed proofs and justifications provided in the appendices.
+ Despite the complex mathematical concepts, the paper is generally well-written and clearly explains the proposed methods and results.
+ The code is publicly available and the work is therefore easy to reproduce.
**Weaknesses**
- The experimental evaluation is limited to a toy regression and convex hull estimation problems. Evaluation on real world tasks (e.g., molecular property prediction, point cloud classification) and comparison to existing equivariant models on these tasks would strengthen the paper.
- It is not clear in what aspects the paper goes beyond existing work, in particular, CGENN (Ruhe et al., 2023).
Other Comments Or Suggestions: While the visualization shown in Figure 1 is nice, the labels are too small and therefore hard to read.
Questions For Authors: 1. The case of Cl_(0,3,0) is arguably one of the most relevant for real world tasks in e.g., physics and chemistry. It seems that here, the scalars, vectors, bivectors (pseudovectors), and trivectors (pseudoscalars) of the Clifford algebra are isomorphic to irreducible representations of O(3) with degrees 0 and 1 of even and odd parity. Can the authors comment on this connection and do direct comparisons to equivariant models based on irreducible representations of O(3)?
2. The authors present Linear, Geometric Product, and Normalization layers for the Clifford algebra. I am aware that the Geometric Product and Normalization layers introduce nonlinearities, but I still wonder about the possibility of using more typical activation functions. It seems like any nonlinearity could be applied to the scalar feature components, and antisymmetric functions could be applied to the trivector/pseudoscalar feature components, without affecting the equivariance of the resulting model. Have the authors considered this possibility? How does the introduction of nonlinear activation functions affect the performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We are happy to answer the questions and address the concerns:
**1. Novelty of theory**
We claim that the proposed theory of generalized Lipschitz groups (GLG) in Clifford algebras (CA) $Cl_{p,q,r}$ is new (Sect. 3). These groups are introduced in arbitrary $Cl_{p,q,r}$ for the first time in this work. All theorems in the main part are original and cannot be found in the literature.
**2. How GLGENN goes beyond CGENN**
GLGENN goes beyond CGENN in the following: (1) Parameter-sharing approach for CA-based NN is presented for the first time (see Point 3). (2) New layers with parameter sharing are proposed, theoretical justification is provided (Sect.4). (3) General idea is proposed that if we need orthogonal groups equivariance, then we may search for broader groups equivariance, such as new GLG, and get reasonable results (Theor.3.4 and experiments). (4) New GLG, that are interesting by themselves, are introduced and studied (Sect.3). **We will add** these details.
**3. Novelty of parameter-sharing technique**
To the best of our knowledge, this work is the first to incorporate parameter-sharing techniques in CA-based NN that explicitly respect the inner structures of CA. By our approach, the weights are shared with the step size 4 in dimension of grades subspaces. This step size aligns with reversion and grade involution, the theoretical part of our work explains why this is important.
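A schematic of this sharing pattern (our illustrative reading of the rebuttal; the four-parameter vector `w4` is hypothetical): instead of one learnable weight per grade, grades are tied with period 4.

```python
import numpy as np

def shared_grade_weights(n, w4):
    # Tie per-grade weights with period 4 (grade k uses w4[k % 4]),
    # aligning the sharing pattern with the period-4 sign structure of
    # reversion and grade involution on the grade subspaces.
    return np.array([w4[k % 4] for k in range(n + 1)])

# Over an 8-dimensional vector space there are 9 grade subspaces, but
# only 4 learnable parameters instead of 9:
print(shared_grade_weights(8, [1.0, 2.0, 3.0, 4.0]))
# [1. 2. 3. 4. 1. 2. 3. 4. 1.]
```

This is only a sketch of the parameter count reduction; the actual GLGENN layers apply such tied weights inside equivariant linear and geometric product operations.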
**4. Limited experiments**
We thank the reviewer for the ideas and would like to make clarifications. (1) While our experiments are toy problems, they are benchmarks in the field and used in highly regarded papers on equivariant NN (e.g., Finzi et al.,2021; Ruhe et al.,2023; Liu et al.,2024). This is why we chose them as a proof of concept. (2) The convex hull task is particularly challenging in high-dimensional settings, as evidenced by the substantial loss incurred by previous SOTA models (Fig.3). (3) Adaptation of GLGENN (or other CA-based NN) to complex domains such as molecular/protein studies or images/point clouds is a very important but extensive task that probably deserves a distinct dedicated paper (e.g. see Pepe et al. (2024) on CGENN for protein structure prediction). (4) To strengthen our experimental section, **we will add** experiments, see Point 6 of our response to Reviewer xqBf.
**5. Q1: Connection of CAs to irreps of O(3)? Comparison of GLGENN and models based on irreps of O(3)?**
We do not see a direct connection between $Cl_{0,3,0}$ and the irreps of O(3). Irreps of O(3) follow the tensor approach, whereas the CA operates with multivectors. The dimensions of the subspaces of scalars, vectors, bivectors, and trivectors in CA are equal to 1,3,3,1 resp.; in contrast, for rank-2 tensors, the dimensions of the l-irreps of O(3), l=0,1,2 are 1,3,5 resp., so there is no isomorphism between them. E.g., bivectors are isomorphic to antisymmetric tensors of degree 2, which do not correspond to l=2-irrep of O(3) for rank-2 tensors.
While equivariant models based on irreps of O(3) rely on tensor product reps, which can be complex and require additional basis reps, GLGENN bypass the need for them. This leads to 2 benefits: GLGENN directly transform data in a vector basis, avoiding the need for operating on alternative basis reps such as spherical harmonics; GLGENN involve geometrically meaningful product structure through the geometric product.
**6. Q2: Typical activations to scalars?**
Yes, it is possible to apply standard activations to scalars and remain equivariant; we explored this in our work. Our results indicate that for simple tasks, the best approach is to combine GLGENN (applied to all grades subspaces) with standard networks (e.g. MLP) with typical activations applied to scalars (0-grade subspace). E.g. in O(5,0)-Regression: (1) MLP (3 linear layers with ReLU) alone performs poorly, (2) GLGENN alone performs reasonably well but converges slower than (3) GLGENN (to all grades)+MLP (to scalars). In case of 300 train samples, we get the results [visualized here](https://drive.google.com/file/d/1VqlvFTx-SqGJo3-OSES6MMVBkuJJi1K5/view?usp=sharing) and presented in the [table](https://drive.google.com/file/d/14nemp06cX7XtjftbKHplxcPCLIk-VHcU/view?usp=sharing). But in more complex tasks, this combination becomes less critical. In all our convex hull experiments, the best results are achieved with the GLGENN nonlinearities, w/o additional activations. The key issue is that nonlinearities applied only to specific subspaces (e.g. scalars) do not allow interactions between different grades (e.g. vectors and bivectors), isolating them; while the nonlinearities in geometric product layers mix up all grades, creating strong interactions. **We will add** these details.
**7. GLGENN on large train datasets**
We are positive regarding GLGENN scalability to larger train datasets and have extended our experiments. Please see Point 6 of our response to Reviewer RrP5.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their replies and promised additions to the paper. I raise my score to 3 (I am assuming the authors will keep their word and make the promised changes in the final version of the manuscript).
However, there is one aspect of my review I wish the authors would not so easily dismiss: I still believe there is a connection to O(3) irreps (I also consulted with a mathematician colleague of mine, who agreed with me).
> We do not see a direct connection between $Cl_{0,3,0}$ and the irreps of O(3). Irreps of O(3) follow the tensor approach, whereas the CA operates with multivectors. The dimensions of the subspaces of scalars, vectors, bivectors, and trivectors in CA are equal to 1,3,3,1 resp.; in contrast, for rank-2 tensors, the dimensions of the l-irreps of O(3), l=0,1,2 are 1,3,5 resp., so there is no isomorphism between them. E.g., bivectors are isomorphic to antisymmetric tensors of degree 2, which do not correspond to l=2-irrep of O(3) for rank-2 tensors.
Note that a rank-2 tensor (3x3 matrix) can be written as a direct sum of irreps with l=0 (with even parity), l=1 (with odd parity), and l=2 (with even parity). It is correct that these have dimensions 1, 3, and 5, which matches the 3x3=9 degrees of freedom of a rank-2 tensor. But this is not what I am talking about, it is not about equivalence of CA with rank-2 tensors. In fact, l=2 irreps have no equivalent in CA, but l=0 and l=1 irreps do:
Scalars in CA are equivalent to l=0 irreps with even parity (one dimension, does not change sign under reflections of the coordinate system), vectors are equivalent to l=1 irreps with odd parity (three dimensions, behave rotationally like a vector and change direction under reflections of the coordinate system), bivectors are equivalent to l=1 irreps with even parity (three dimensions, behave rotationally like a vector and do *not* change direction under reflections of the coordinate system), and trivectors are equivalent to l=0 irreps with odd parity (one dimension, *does* change sign under reflections of the coordinate system). The dimensionality matches, the behaviour under rotations/reflections of the coordinate system matches, and even coupling operations seem to match (for example, coupling two l=1 irreps with odd parity to an l=1 irrep with even parity is essentially computed by performing a vector cross product, same as in CA).
Given that there is a wealth of literature on equivariant models based on O(3) irreps, I think it is worth exploring this connection further and try to bridge the two communities. I would really appreciate if the authors added at least a small paragraph to their paper that discusses this.
> While equivariant models based on irreps of O(3) rely on tensor product reps, which can be complex and require additional basis reps, GLGENN bypass the need for them. This leads to 2 benefits: GLGENN directly transform data in a vector basis, avoiding the need for operating on alternative basis reps such as spherical harmonics; GLGENN involve geometrically meaningful product structure through the geometric product.
That statement is wrong. If one is limited to irreps with l=0 and 1 (same as in CA), then l=0 can be represented by (pseudo)scalars and l=1 by ordinary (pseudo)vectors. Nothing is more complex here. The coupling operations you need are not more complex than for CA either. In fact, they are the same (I use + for even and - for odd parity):
* 0+ and 0+ to 0+: multiplication
* 0+ and 0- to 0-: multiplication
* 0+ and 1+ to 1+: scalar multiplication
* 0+ and 1- to 1-: scalar multiplication
* 0- and 1+ to 1-: scalar multiplication
* 0- and 1- to 1+: scalar multiplication
* 1- and 1- to 0+: scalar (dot) product
* 1- and 1+ to 0-: scalar (dot) product
* 1- and 1- to 1+: cross product
* 1- and 1+ to 1-: cross product
* 1+ and 1+ to 1+: cross product
(I hope I didn't miss anything, but I'm sure the general idea comes across).
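The parity claims in this list can be spot-checked numerically; a small sketch under spatial inversion (reflection through the origin, where odd-parity 1- vectors flip sign):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

# 1- and 1- to 0+: the dot product of two odd vectors is an even scalar
# (invariant under inversion).
assert np.isclose(np.dot(-a, -b), np.dot(a, b))

# 1- and 1- to 1+: the cross product of two odd vectors does not flip,
# i.e., it transforms as an even-parity (pseudo)vector, matching the
# bivector behaviour in the Clifford algebra.
assert np.allclose(np.cross(-a, -b), np.cross(a, b))

print("parity checks passed")
```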
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the detailed and insightful comments and for raising the score. We agree that exploring connections between CA and irreps of O(3) could be valuable for bridging communities. We will add a dedicated paragraph to the paper clarifying the correspondence between CA subspaces (k-vectors) and specific O(3) irreps. We are grateful for the reviewer’s rigorous engagement, which will undoubtedly strengthen the paper.
Claims And Evidence: Let me start by saying that this is an absolute monster of a paper in terms of its conceptual difficulty, and to be able to give this review I have taken 3 full days to read/work out as many of the details of the paper as I can properly, in order to try to understand everything that it is saying and to give a fair and informed review to the authors. It seems to me that this paper follows/builds upon a sequence of previous publications (Ruhe et al, 2023; Brehmer et al, 2023; Zhdanov et al, 2024; Liu et al., 2024; Pepe et al. 2024), which I believe makes the entry into this paper very difficult if you have not read these previous works (I have not). I am convinced that for 95%+ of people reading this paper they would give up as soon as they read the Theoretical Background section (Section 2), since it is incredibly slick and technical. Despite this, having read it a few times, I felt that it was comprehensive and with a bit of work from my side I could understand what it was saying. I appreciate that the paper is already long (32 pages including Appendix) but I think to aid the reader even more I would like the authors to provide a running example (e.g on the Minkowski space $\mathbb{R}^{1,3}$) of their constructions that appear in this section in the Appendix. From there we get into the meat of the paper in Section 3. Again this is very slick, which I understand based on the 8 page limit, but it is apparent to me that in order to even begin to understand what is going on you really need to read the Appendix first. I can't help feeling that this paper, given its technical depth, would have been better served by being written as a Journal paper instead, since in my view the logical flow would be more apparent in a journal paper, without having to make jumps between theorems (as I do in its present form). 
In spite of this, everything that I could understand in this section (about 80% of the section, let's say) was to me correct (and that is going line-by-line through it and the Appendix). So in my mind, on the basis of probability, I do not have doubts about the theoretical claims that were made by the authors. I would still strongly encourage them to think even more about how to present this work more cleanly - I appreciate that they have done this to a large extent already (e.g., with take-away summaries after each subsection, a notation overview in the appendix, etc.) but there is a lot going on (many groups and algebras are introduced, some with tildes, some without, some with bars, some without) and I think at times some notation is used without it being defined anywhere. For example, I don't think, as far as I can tell, that $(Cl_{p,q,r}^{(0)\times} \cup Cl_{p,q,r}^{(1)\times})\Lambda_r^{\times}$ is defined anywhere in the text. I would also like the authors to be consistent with whether they use $U$ or $x$ to refer to an element in $Cl_{p,q,r}$: there is a switch from $U$ to $x$ that takes place between Sections 2 and 3 that could confuse people.
Methods And Evaluation Criteria: For me, the methodology section (Section 4) was relatively clear, where the authors described the architecture of GLGENN. I did think the commentary about the data objects in 4.1, however, was not entirely clear - where do these data objects ($x_1, \dots, x_l$) live? I think this could be stated directly in the text. Otherwise I was happy with what the authors wrote in this section.
Theoretical Claims: I have already answered this above. I have checked 80% of this line by line, and where it is simply beyond my technical understanding, I have had to leave it unchecked. But I am happy with everything that I have checked.
Experimental Designs Or Analyses: I was quite disappointed by the experiments that the authors chose, and it led me to thinking whether the theory that had been provided before was "overkill". Having produced their exceptionally technical theoretical results, the authors chose to demonstrate their results on tasks that are solely based on Euclidean space, with the equivariance being to a standard orthogonal group on said Euclidean space. Hence I am left wondering how their network performs on tasks that are equivariant to groups where $q > 0$. I am also not entirely convinced by the conclusions of their experiments either. In both cases the authors claim that GLGENN performs on a par with the state-of-the-art CGENN and reduces overfitting with fewer parameters. I am not convinced that GLGENN performs on a par with CGENN. Looking at Figure 2 with Table 2, and then Figure 3 and Table 3, I see that GLGENN starts performing worse than CGENN as the number of training samples increases, and, in particular, in a divergent manner in Table 3. I therefore ask what is the behaviour of the test MSE of these two models as the number of training samples increases?
Supplementary Material: I reviewed all parts to the best of my ability. Some of the proofs in Sections E and G got away from me, but I am predominantly happy with what I read from a technical standpoint.
Relation To Broader Scientific Literature: The relation to the broader scientific literature was comprehensive, with a very nice introduction that set the work in the appropriate context.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Without repeating myself:
Strengths: This is clearly a high quality technical work in the field of equivariance that demands a significant amount from the reader to understand the details. I felt that I learned a lot and in many parts enjoyed "grokking" the details. I also liked the mini-summaries as the work progressed, which helped me to take away the key points when I got entirely lost in the technical details (this is something I will be taking and using in my own papers!). I thought the Appendix was pretty comprehensive, and the discussion on the methodology was good too. Despite my strong doubts about whether this should be a journal paper or not, I still think there will be some people at ICML who will appreciate a technical work like this, so on balance (with my score) I recommend it (only just) for publication. However I would like the authors to take on board my comments about improving the paper's accessibility - even if it is a work in a line of works there should still be sufficient runway given to make it into enough of a standalone paper.
Weaknesses: As stated above, I was disappointed by the types of experiments that appeared, which to me did not show the full benefit of the theoretical construction. I also think more can be done to help facilitate the entry into this paper - whilst I appreciate the contribution is technical, the onus is still on the authors to help make things as clear as they can. I have suggested providing a running example of all key constructions (not just a summary of their meaning) in the Appendix. I would also like the message of what the authors are actually trying to show to be clearer too: for example, in the main text, lines 165-170 (2nd column) the authors say that they are looking to construct equivariant NNs with respect to $\tilde{\Gamma}_{p,q,r}^{\bar{k}}$ for $k = 1$, but in Appendix B, lines 641-645, they say the NNs are equivariant to general $k$ - which one is it? I appreciate this is tough given the technical nature of the paper but again I really want the story to shine through (this doesn't always stick as you read it despite the mini summaries) - perhaps they could look to "give up" more of the story earlier on so that we know where they're going? An example of this appeared in Section 3.1 where the authors just suddenly introduce large families of groups (lines 184-207, second column) without telling us why/where it's leading - as readers we're already under the strain as it is!
Other Comments Or Suggestions: Here are some comments/suggestions/typos:
1) I think you should define $(Cl_{p,q,r}^{(0)\times} \cup Cl_{p,q,r}^{(1)\times})\Lambda_r^{\times}$ somewhere before it is first used in Lemma 2.1.
2) As discussed, I think you should prime the reader as to why you introduce the $Q$ families of Lie groups in lines 185 etc, second column, as well as the $Z$ centralisers (as in tell us where you're going with this much earlier than the concluding sentence.)
List of (potential) typos:
1. Line 320, 1st column: equivariant is spelled incorrectly
2. Line 341, 1st column: you write $Cl$, do you not mean $Cl_{p,q,r}$?
3. Line 362, second column: I think this should be Theorem 3.9, not Lemma 3.9
4. Appendix F: in the title, equivariant is spelled incorrectly
5. Appendix F, Example F1: do you not mean $T$ instead of $g$ in both (137), (138)?
Questions For Authors: 1) Line 134, 1st column: do you really mean the word "superposition"? To me, superposition means add/linear combination, but here I think you mean the combination/composition of the two operations, one after the other.
2) I think (10) could be formatted better (similarly (16), (17)). Also for (10) I don't think that the $:=$ that comes after $\tilde{ad}_T(...)$ is correct, isn't it a deduction from (7)? Upon first reading I had wondered if you had meant $\breve{ad}_T(...)$ instead.
3) In line 235, second column, in the discussion, there is a switch to $V$ - just to check, is this is the same as $\Lambda_r^1$? If so, I think you should be telling us otherwise it looks like a typo.
4) (a copy from above) in the main text, lines 165-170 (2nd column) the authors say that they are looking to construct equivariant NNs with respect to $\tilde{\Gamma}_{p,q,r}^{\bar{k}}$ for $k = 1$, but in Appendix B, lines 641-645, they say the NNs are equivariant to general $k$ - which one is it? If it is only $k = 1$, then I think in lines 165-170 they should tell us as they introduce everything for general $k$ and to the reader at this point it looks like a mistake (hence could be confusing as it goes along).
5) In (36) do you not need brackets in there around $x$ on the RHS and around $\langle x \rangle_{\bar{m}}$ on the LHS?
6) (a copy from above) in both of your experiments, what is the behaviour of the test MSE of these two models as the number of training samples increases? I am not entirely convinced of the "on par" performance of the two models GLGENN and CGENN.
7) To improve readability, is there any way to reduce the general complexity that is presented if you are only targeting equivariance for $k = 1$?
8) Finally I felt that Figure 1 did nothing for me - I still don't understand what I am meant to be taking from it, even after reading the text in full multiple times. Could the authors try to explain what I should be taking from this figure (and then perhaps improve it/move it further down somewhere where it might fit better).
## update after rebuttal: please see my comments in my response to the authors' rebuttal.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We are happy to answer the questions and address the concerns as follows:
**1. Q1: Superposition in Clifford conjugation?**
Thank you. We will change the word to ‘composition’.
**2. Q2: Formatting in (10),(16),(17)?**
Thank you. We will format definitions (10),(16),(17) so that they fit on a single line each, remove := in them, and add := in (5)-(7).
**3. Q3: Restriction of $\tilde ad$ to V?**
No, this is not a typo; we intentionally refer to V, which is the real $\mathbb R^{p,q,r}$ or complex $\mathbb C^{p+q,0,r}$ vector space (line 98). V is identified with the subspace $Cl^1_{p,q,r}$ of vectors. In line 235, we mean that $\tilde{ad}^1$ is defined as the twisted adjoint representation $\tilde ad$ (7) restricted to the entire subspace $V=Cl^1_{p,q,r}$, not just to the radical subspace $\Lambda^1_r=Cl^1_{0,0,r}$. Namely, while $\tilde{ad}_T$ acts on $Cl_{p,q,r}$, the restricted representation $\tilde{ad}^1_T$ acts only on the subspace of vectors $Cl^1_{p,q,r}$ (line 237). We will add this explanation to avoid any confusion.
**4. Q4: Equivariance to which of new groups is used in GLGENN?**
Thank you. GLGENN is equivariant w.r.t. $\tilde\Gamma_{p,q,r}^{\bar1}$. We will remove k=0,2,3 in lines 641-645 and clarify in the beginning of Sect.3, that for GLGENN, we are interested in $\tilde\Gamma_{p,q,r}^{\bar1}$, however other groups $\tilde\Gamma_{p,q,r}^{\bar k}$, k=0,2,3, are still necessary to prove the main statements about $\tilde\Gamma_{p,q,r}^{\bar1}$ (see Q7).
**5. Q5: Brackets in $\tilde{ad}_T(x)$?**
In the literature, both $\tilde{ad}_T(x)$ or $\tilde{ad}_T x$ are commonly used. We will put brackets in the whole paper for the sake of accuracy.
**6. Q6: Behavior of test MSE when training set size increases?**
The test MSE as the training set size increases is as [follows](https://drive.google.com/file/d/1llsG1wPRNyQ4IIicZNt4Wpyxn4IJn7oZ/view?usp=sharing). We will add these results. In regression, GLGENN consistently outperforms CGENN across all training set sizes. In the convex hull task, the performance gap becomes stable, and GLGENN is still competitive. Again note that in this task, GLGENN and CGENN have 24.1K and 58.8K parameters, respectively. We are optimistic about GLGENN's scaling. But as we state, our goal is to reduce the risk of overfitting in the case of small training data — a common scenario in the natural sciences, where datasets are often manually derived from experiments.
**7. Q7: Role of $\tilde\Gamma_{p,q,r}^{\bar k}$, k=0,2,3?**
We share the reviewer's concern about reducing general complexity to improve readability. While equivariance w.r.t. $\tilde\Gamma_{p,q,r}^{\bar1}$ is directly required for GLGENN, the other 3 groups play a crucial role in proving a key fact about this group: its elements preserve all the 4 subspaces $Cl_{p,q,r}^{\bar k}$, k=0,1,2,3 under $\tilde ad$ (see (28)-(29)). This implies that projections onto these 4 subspaces are $\tilde\Gamma_{p,q,r}^{\bar1}$-equivariant (Theorem 3.6). We appreciate and will implement the idea of making it clearer at the beginning of Sect.3 that $\tilde\Gamma_{p,q,r}^{\bar1}$ is the primary group of interest, while the others are auxiliary tools.
**8. Q8: Figure of GLGENN?**
Figure 1 does not introduce new information but serves as a visual representation of concepts already described in the text. We intended the following key points to be intuitively grasped by readers when looking at it: (1) GLGENN is equivariant w.r.t. orthogonal groups (we can apply an orthogonal transformation to the input or output of GLGENN and are guaranteed to get the same answer); (2) GLGENN acts in a unified manner across the subspaces $Cl_{p,q,r}^{\bar k}$, k=0,1,2,3 defined by the grade involution $\hat{}$ and reversion $\tilde{}$. As a result, scalars, vectors, bivectors, etc. are formed into 4 distinct groups. Point (1) is visualized by the orange and purple arrows and the 4 main large rectangles forming a diamond shape. Point (2) is represented by the 4 smaller rectangles (yellow, green, purple, and orange) and +/- signs indicating the actions of grade involution and reversion on these groups.
**9. Notation**
The notation $(Cl_{p,q,r}^{(0)\times}\cup Cl_{p,q,r}^{(1)\times})\Lambda_r^{\times}=\{ab \mid a\in Cl_{p,q,r}^{(0)\times}\cup Cl_{p,q,r}^{(1)\times},\ b\in\Lambda_r^{\times}\}$ represents the product of 2 groups: the group of even or odd invertible elements (formula (3)) and the group of all invertible elements of the Grassmann subalgebra (line 104). We will add these clarifications and ensure that all other notations are properly defined.
**10. Typos**
We agree with all the corrected typos, thank you.
**11. Improvement of presentation**
We are very grateful to the reviewer for the comments on how to improve the presentation. We like and will surely implement the ideas of providing a running example of the main concepts and revealing more of the key ideas earlier in the paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. I am satisfied with their comments and am glad to see the results of the additional experiments that were run - although I maintain my score I am of the view that this paper should be recommended for publication.
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind feedback and for taking the time to review our work. Your constructive feedback is very valuable for strengthening the paper, and we’re grateful for your recommendation for publication. | Summary: This paper introduces a new version of Clifford Group Equivariant Neural Networks (CGENN), originally introduced by Ruhe et al., 2023, called GLGENN. The authors develop a theory for generalized Lipschitz groups which generalize and contain Clifford groups. Generalized Lipschitz groups preserve a subspace decomposition corresponding to involution and reversion operations on the Clifford algebra. If I understand correctly, enforcing equivariance to this larger group using a weight-sharing scheme with respect to this coarser subspace decomposition, as opposed to the finer grading, results in fewer trainable parameters relative to CGENN. Since the group is larger, there is still equivariance to the Clifford group and thus to the orthogonal group. The experiments show that there is no loss in performance despite the fewer parameters and the overconstraint.
Claims And Evidence: - The paper notes that CGENNs are overparameterized and tend to overfit and have slow training times, motivating the need for a version with fewer parameters. I didn’t specifically see evidence of overfitting in the experiments. The slow training time for CGENN is certainly a concern, but there is no evidence in the main text that GLGENNs remedy this.
- GLGENNs are shown to have comparable performance to CGENNs even with fewer parameters.
- Whether the larger group constraint poses a problem for expressiveness is not conclusively shown. In the 2 synthetic tasks considered it did not.
- The equivariance of GLGENNs is firmly established by an extensive theory section.
Methods And Evaluation Criteria: - The method is a reasonable evolution of CGENN as described in the summary above. The authors provide analogs of the layers provided by Ruhe et al, 2023. Except for the Conjugation operation, the novelty appears to be limited to generalization to the new group and grading.
- The evaluations are similar to those in prior work, but are small scale and synthetic. As noted above, the question of overfitting and computation speed do not appear to be directly addressed.
Theoretical Claims: - A definitive strength of the paper. The mathematics of clifford geometric algebras, and generalized Lipschitz groups is very clearly laid out both in the background and new contributions.
- The authors prove several results which firmly underlie their method: (1) generalized lipschitz groups contain the clifford groups and thus equivariance wrt gen. lipschitz groups implies clifford group equivariance (2) the subspace decomposition is preserved by gen. lipschitz groups, and (3) gen lipschitz equivariance implies orthogonal group equivariance, (4) several different operations are equivariant wrt gen. lipschitz groups, providing the basis for the layers in CGENN.
- The theory section could use a bit more scaffolding/prefacing to help connect the results to their eventual use in constructing CGENNs. I had to read forward and backward a bit to understand.
Experimental Designs Or Analyses: - As noted above, the experiments do demonstrate that a smaller GLGENN performs comparably to CGENN, but it would be good to also show that a CGENN of the same size as the GLGENN underperforms it. If not, it undermines the main claims.
- The experiments are very small-scale and very synthetic. The practical usefulness of the method is not demonstrated. My guess is that it will scale, however, given how similar it is to CGENN, which has been demonstrated in practical applications.
- While GLGENN has fewer parameters, memory footprint and wall clock compute should also be compared.
Supplementary Material: I have skimmed the supplement. It is quite complete, containing full proofs, additional theory, and experimental details. Code is also included.
Relation To Broader Scientific Literature: There is no related work section, but related work is adequately addressed in the introduction and throughout the paper in my opinion.
Essential References Not Discussed: No
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: 320L: Equivariant misspelled
Questions For Authors: - Can the model accommodate higher order features such as tensors?
- Since the parameters of the conjugation operations are discrete, how are they optimized? Are they used in experiments? It would be good to comment in the paper on this.
- The linear layers cannot mix information among \bar{k}, correct?
- Are geometric products and normalization the main source of non-linearity and mixing \bar{k}?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We are happy to answer the questions and address the concerns as follows:
**1. Q1: Higher order features such as tensors?**
Yes, the model can accommodate higher-order features, such as tensors. For example, it can be applied to the N-body problem, where the goal is to predict the positions of N charged particles in an n-dimensional space after a certain number of timesteps, given their initial positions and velocities. In this task, tensors represent the N initial positions and velocities (each with n coordinates). The model can process such tensors when graph neural networks are used to define the structure of connections between objects.
**2. Q2: Conjugation operations layers?**
Thank you, this is a good question. We suggest the following method to make the optimization of conjugation operations possible, though other, potentially more effective, approaches may exist. We first apply a linear transformation to the projections of the input multivector $x_{in}$ as in (42), but with arbitrary parameters $\phi_{c_{in}k}\in\mathbb{R}$. Then we can round them to -1 or 1 either by directly applying $\phi_{c_{in}k}\mapsto sgn(\phi_{c_{in}k})$, or by first applying the sigmoid function, then scaling the value to [-1,1], and applying the sign function to the result. In the presented experiments, these layers are not used, in order to keep the architecture choices of GLGENN as close as possible to those of CGENN and ensure a fair comparison. We mention this in lines 1558-1560 of Appendix F and will add a comment on it in the main part of the paper.
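A minimal NumPy sketch of the two rounding options described above; the function names are illustrative, not from the paper, and for nonzero inputs the two routes agree, since the sigmoid rescaled to (-1, 1) has the same sign as its argument:

```python
import numpy as np

def round_to_pm1_direct(phi):
    # Option 1: map the trainable reals straight to {-1, +1} with the sign function.
    return np.sign(phi)

def round_to_pm1_sigmoid(phi):
    # Option 2: sigmoid into (0, 1), rescale to (-1, 1), then apply the sign function.
    s = 1.0 / (1.0 + np.exp(-np.asarray(phi, dtype=float)))
    return np.sign(2.0 * s - 1.0)

phi = np.array([0.7, -2.3, 1.1])
assert np.array_equal(round_to_pm1_direct(phi), round_to_pm1_sigmoid(phi))
```

The sigmoid route keeps a smooth intermediate quantity available, which is the usual reason to prefer it when gradients through the rounding are needed.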
**3. Q3: Linear layers cannot mix information among $Cl^{\overline{k}}_{p,q,r}$?**
Yes, you are correct. The linear layers do not mix information among the subspaces $Cl^{\overline{k}}_{p,q,r}$, $k=0,1,2,3$. Instead, they perform linear transformations independently within each of these four subspaces. A visual representation of how a linear layer operates can be found at the following [link](https://drive.google.com/file/d/1ev594pDgVjKuMfXY5rCB4eZPK4W2jEju/view?usp=sharing).
**4. Q4: Geometric products and normalization are the main source of non-linearity and mixing $Cl^{\overline{k}}_{p,q,r}$?**
Yes, you are correct. GLGENN introduces non-linearity through the geometric product and normalization layers, where the subspaces $Cl^{\overline{k}}_{p,q,r}$, $k=0,1,2,3$, are mixed.
**5. Overfitting of CGENN.**
Our experimental results (see, for example, Figure 3) align with the results in CGENN paper Ruhe et al., 2023 (see Figure 3 (right) and discussion of the convex hull experiment on page 9 of their work), where they also note the tendency of CGENN to overfit in scenarios with small training datasets.
**6. Other experimental designs.**
We thank the reviewer for the idea on how to improve the work. We are optimistic about the scaling of GLGENN as well. To strengthen our experimental section, **we will add** the experiments done in prior works: (1) the real-world task in $Cl_{1,3}$ of categorizing high-energy jets produced in particle collisions by their trajectories using the data from CERN’s ATLAS detector (this task is considered in the CGENN paper by Ruhe et al., 2023), (2) the classic benchmark N-body experiment (the task is considered in Ruhe et al., 2023 and the GATr paper by Brehmer et al., 2023), and (3) harder settings for the convex hull and regression experiments, in particular, comparing a CGENN of the same size as GLGENN. We will also include a comparison of memory footprint and wall clock time between GLGENN and CGENN. We would like to note that while the current experiments are toy problems, they are considered benchmark tasks in the field and have been used in highly regarded papers on equivariant neural networks (e.g., Finzi et al., 2021; Ruhe et al., 2023; Liu et al., 2024). This is why we chose them as a proof of concept, and GLGENN successfully validated our hypothesis that structured weight sharing improves performance.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response to my questions. Regarding Q1, I believe the positions and velocities would just be vectors, i.e., 1-tensors. My question was about things like matrices (i.e., 2-tensors) where the group acts by conjugation. If the authors do add additional experiments and the model performs well, I believe it would strengthen the paper. I hold, however, that the paper should be accepted even without these larger scale experiments.
---
Reply to Comment 1.1.1:
Comment: We are grateful for your high evaluation of the paper. Thank you for your thoughtful feedback, time, and recommendation for publication. | Summary: This paper introduces a novel group-equivariant architecture based on Clifford geometric algebra that is equivariant to pseudo-orthogonal transformations, is more parameter-efficient, and avoids overfitting compared to prior works in equivariant networks based on Clifford algebra. The design is based on first introducing a generalized Lipschitz group and then constructing an equivariant network for this generalized Lipschitz group, which leads to more parameter-efficient networks. Experimental results on two synthetic datasets show competitive performance with respect to existing Clifford algebra equivariant networks.
Claims And Evidence: While the theoretical contribution seems straightforward, the practical motivation is not very clear to me: how does equivariance to the generalized group lead to more parameter efficiency? If it is straightforward, please provide a better explanation. Also, in the experiments, shouldn’t using a more general Lipschitz group lead to better performance? Why is the performance slightly worse than CGENN, i.e., is there a reason why the expressivity of the proposed method is limited compared to prior work?
Methods And Evaluation Criteria: The experiments seem very limited compared to both the motivation of the work and prior works such as Geometric Algebra Transformers (GATr). Is it possible to provide more of the experiments done in prior works?
Theoretical Claims: This looks good to me
Experimental Designs Or Analyses: Soundness/validity of experiments look good.
Supplementary Material: I looked through the supplementary material wherever required.
Relation To Broader Scientific Literature: This paper makes progress in the area of Clifford Algebra equivariant networks, which is an established direction in the area of efficient machine learning.
Essential References Not Discussed: Looks good to me
Other Strengths And Weaknesses: Strengths:
1. This work proposes a novel architecture based on Clifford geometric algebra for a new class of Lie groups
2. The proposed equivariant model construction technique is light weight in terms of number of parameters compared to prior Clifford algebra networks
3. Experimental results show that the proposed method is competitive with existing methods as well as light-weight in terms of parameters
Weaknesses:
1. While the theoretical contribution seems straightforward, the practical motivation is not very clear to me: how does equivariance to the generalized group lead to more parameter efficiency? If it is straightforward, please provide a better explanation. Also, in the experiments, shouldn’t using a more general Lipschitz group lead to better performance? Why is the performance slightly worse than CGENN, i.e., is there a reason why the expressivity of the proposed method is limited compared to prior work?
2. The experiments seem very limited compared to both the motivation of the work and prior works such as Geometric Algebra Transformers (GATr). Is it possible to provide more of the experiments done in prior works?
3. What are the number of parameters in Table 2?
Other Comments Or Suggestions: None
Questions For Authors: Please check the weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We are happy to answer the questions and address the concerns as follows:
**1. Q1: Relation between generalized Lipschitz groups equivariance and parameter efficiency?**
Application of the generalized Lipschitz groups (introduced in this work) instead of ordinary Lipschitz groups allows us to achieve parameter efficiency, as we explain below. The generalized Lipschitz groups are important because they preserve the four fundamental subspaces of Clifford algebras under the significant operation of the twisted adjoint representation. We prove that equivariance of a mapping w.r.t. these groups, as well as the ordinary Lipschitz groups, implies its orthogonal group equivariance. The key distinction is that the generalized Lipschitz groups contain ordinary Lipschitz groups as subgroups. As a result, the set of operations equivariant w.r.t. the generalized Lipschitz groups is a subset of the set of operations equivariant w.r.t. ordinary Lipschitz groups. This reduction in the number of ‘degrees of freedom’ encourages us to parametrize operations in layers in a more ‘economical’ way (there are fewer parameters to place). Specifically, in all GLGENN layers, we employ such equivariant operations as projections of input multivectors onto the 4 fundamental subspaces of Clifford algebras mentioned above, whereas CGENN relies on projections onto the subspaces of fixed grades. GLGENN and CGENN layers parameterize linear combinations, products, and normalizations of the corresponding projections. The number of fixed-grade subspaces is equal to $n+1$, where $n$ is the dimension of the task’s vector space. As $n$ increases, the number of CGENN parameters grows significantly, while for GLGENN, it remains constant. To summarize, the key reason why GLGENN layers are parameter-efficient lies in how we parametrize the operations in them: using the projections onto the 4 fundamental subspaces, instead of $n+1$ subspaces as in CGENN.
The growth in the number of parameters for similar layers in GLGENN vs. CGENN as $n$ increases can be estimated by the formulas in our paper. GLGENN geometric product layer contains $4k^2+4^3k$ parameters for $k$ input channels (Sect. 4.4), while CGENN geometric product layer has $(n+1)k^2+(n+1)^3k$ parameters. Similar formulas for other GLGENN layers are in Sect. 4.3 and 4.5. The formulas for CGENN can be obtained by replacing 4 by $n+1$.
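The two geometric product layer formulas quoted above can be tabulated directly; a quick sketch (the formulas are from the rebuttal, the function names are ours):

```python
def glgenn_gp_params(k):
    # GLGENN geometric product layer: 4*k^2 + 4^3*k parameters for k input
    # channels -- independent of the vector-space dimension n (4 fixed subspaces).
    return 4 * k**2 + 4**3 * k

def cgenn_gp_params(k, n):
    # CGENN analogue: (n+1)*k^2 + (n+1)^3*k parameters -- grows with n,
    # since CGENN uses the n+1 fixed-grade subspaces.
    return (n + 1) * k**2 + (n + 1) ** 3 * k

# For n = 3 the counts coincide (n+1 = 4); for larger n, CGENN grows while
# the GLGENN count stays flat, e.g. with k = 8 channels:
for n in (3, 5, 8):
    print(n, glgenn_gp_params(8), cgenn_gp_params(8, n))
```

This makes the claimed scaling gap concrete: at $n=5$ and $k=8$, the counts are 768 vs. 2112.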
We thank the reviewer for this good question for presentation improvement, **we will add** these more detailed explanations.
**2. Q1 (part 2): Better performance of GLGENN in experiments?**
One of the goals of our work is to design a model that performs better than CGENN in the case of small training datasets, which is a common scenario in the natural sciences, where datasets are often manually derived from experimental results. GLGENN is designed to mitigate overfitting in such cases, while its expressivity does not necessarily provide an advantage when data is abundant. Specifically, GLGENN significantly outperforms CGENN in both convex hull experiments, particularly when the training set is small (e.g., $2^{8}$ or $2^{10}$ samples in the $n=5$ case, Fig. 3). When the training set size is large for the task ($\geq 2^{12}$ samples), GLGENN and CGENN are on a par (with the difference <0.4 MSE, which is, in our opinion, almost insignificant in comparison to the difference in parameter count – 24.1K for GLGENN vs. 58.8K for CGENN), and similar behaviour persists for training set sizes $>2^{14}$ samples. In the regression task, GLGENN again achieves lower MSE than CGENN when trained on a small dataset (300 samples, 0.002 vs. 0.0089). With an extremely small training dataset (30 samples), GLGENN is on a par with all other models, except MLP and MLP+Aug, likely due to insufficient data for meaningful generalization. When the dataset is large ($3\cdot10^3$ or $3\cdot10^4$ samples), CGENN slightly outperforms GLGENN, but the difference diminishes as data grows.
**3. Q2: More experiments done in prior works?**
Please refer to Point 6 in our response to Reviewer xqBf, where we provide details on which experiments **we will add**.
**4. Q3: Number of parameters?**
The number of parameters in Table 2 (O(5,0) Regression Experiment) is as follows. All the models besides GLGENN and CGENN have ~150.3K parameters in total. In CGENN and GLGENN, the architecture contains sequentially applied Clifford algebra-based layers (applied to the subspaces of all grades) and ordinary MLP layers (applied only to the subspace of grade 0, i.e., scalars). The most significant layers, and at the same time the most expensive to train, are the Clifford algebra-based ones: CGENN possesses ~1.8K parameters associated with such layers, while GLGENN has ~0.6K. For the MLP part, which is very fast and easy to train, CGENN and GLGENN both have ~148.5K parameters. **We will add** this information.
Structured Preconditioners in Adaptive Optimization: A Unified Analysis | Accept (poster) | Summary: This paper presents a unified analysis of adaptive optimization algorithms with structured preconditioners, challenging the assumption that better approximations of full-matrix Adagrad or less structured preconditioners always yield superior performance. The authors introduce "well-structured preconditioners" enabling a general regret bound, revealing a trade-off between domain metric and adaptive gradient norm. They demonstrate that a one-sided Shampoo variant achieves improved regret bounds and outperforms full-matrix Adagrad theoretically and empirically, suggesting simpler, more structured preconditioners can be more effective. The work provides a new perspective on adaptive optimizer design and insights for efficient large-scale training.
Claims And Evidence: Most of the claims are well-evidenced, except for the claim that "Conceptually, our findings challenge the conventional wisdom that using a larger set of preconditioners which require more memory and compute leads to better optimization performance in terms of number of steps." This is for two major reasons: (a) The analysis of one-sided Shampoo focuses heavily on a particular type of loss function (Equation 7) and a simplified experimental setting. It's unclear if these findings generalize to other types of loss functions or more complex problems. (b) The experimental setup is highly specific and doesn't reflect the complexities of typical deep learning scenarios.
Methods And Evaluation Criteria: The proposed methods have theoretical merit, but the evaluation criteria are not sufficiently comprehensive to support strong claims about the practical benefits of one-sided Shampoo in real-world deep learning scenarios. More extensive and realistic experiments are needed.
Theoretical Claims: I haven't conducted a detailed verification of the proofs. However, the theoretical claims appear to be generally sensible and don't immediately contradict known principles in adaptive optimization. A rigorous validation of the proofs would be necessary for complete confirmation.
Experimental Designs Or Analyses: While the experimental design appears logically sound for the specific problem under consideration (quadratic optimization), the lack of realism raises concerns about the validity of extrapolating these results to more complex, real-world scenarios. The absence of experiments on standard deep learning datasets and models is a significant limitation.
Supplementary Material: My review of the supplementary material was limited to a high-level overview of the proofs. Due to time constraints, I was unable to perform a full and complete verification of all the mathematical details.
Relation To Broader Scientific Literature: The paper's key contributions relate to the broader literature in the following ways:
- Adaptive Optimization: Builds upon and refines existing adaptive optimization methods like AdaGrad, Adam, and Shampoo (Duchi et al., 2011; Kingma & Ba, 2014; Gupta et al., 2018), particularly Gupta et al. (2017), but overcomes limitations by introducing the concept of "well-structured preconditioners."
- Shampoo Optimizer: Contributes to the understanding and improvement of the Shampoo optimizer (Gupta et al., 2018; Anil et al., 2020) by proposing and analyzing a one-sided variant with improved theoretical properties.
Essential References Not Discussed: As far as I know, no other reference needs to be discussed.
Other Strengths And Weaknesses: # Strengths
- The theoretical analysis is generally well-presented and easy to follow (assuming sufficient background knowledge).
# Weaknesses
- The lack of experiments on standard deep learning benchmarks limits the practical significance of the findings.
- The motivation for focusing on the specific quadratic problem in the experiments could be strengthened.
Other Comments Or Suggestions: While Section 3.3 demonstrates that the proposed well-structured preconditioner framework encompasses a range of existing adaptive optimizers, the paper misses an opportunity to provide comparative insights within this unified framework. A more detailed analysis of the relative strengths and weaknesses of different optimizers, as revealed by the framework itself, would have further strengthened the contribution.
Questions For Authors: Could you elaborate on the intuition behind the development of the well-structured preconditioner sets? What key insights led you to believe that these specific algebraic properties (closure under scalar multiplication, addition, and multiplication) would be sufficient to address the limitations in previous analyses?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the theoretical insight of our work. We will address your concerns below.
**A larger set of preconditioners is not necessarily better:** We appreciate the reviewer's concern regarding our claim. Due to computational limitations, it is indeed difficult to compare full-matrix AdaGrad with structured preconditioners on real-world tasks. However, our intention is not to assert that simpler preconditioners always perform better, but rather to challenge the assumption that **a larger set is always superior**. We show this assumption is wrong via theoretical analysis and a synthetic task.
**Practical advantage of one-sided Shampoo:** Thanks for the clarification. We realize that our paper might have unintentionally suggested that one-sided Shampoo has practical benefits on real-world tasks, which we do not claim. Our work is mainly theoretical, and our experiments are limited to a simplified setting that aligns with our analysis. We agree that demonstrating practical benefits would require larger-scale experiments, which is beyond the scope of this submission. We leave this for future work and welcome pointers to such work if it already exists. However, we want to clarify that the theoretical analysis (Theorem 3.3 and 3.7) holds for any convex functions so the analysis of one-sided Shampoo is not restricted to a specific type of function (equation 7).
**Motivation for specific quadratic problem:** We propose a simple quadratic function so that we can easily compute the convergence rates in Table 2 and verify the effectiveness of our theory. We agree that the motivation for this specific quadratic problem may seem unclear, so we have added a new quadratic loss function on which the empirical results are consistent. We only introduce the loss function below; more details can be found in our response to reviewer N1Ca.
We consider a synthetic linear regression problem $||\mathbf{A} \mathbf{X} - \mathbf{y}||\_2^2$ where $\mathbf{A}$ is the data matrix and $\mathbf{y} =\mathbf{A} \mathbf{X}^*$ is the label vector generated by ground-truth $\mathbf{X}^*$. Thus, the loss function can be equivalently written as $f(\mathbf{X}) = \langle \mathbf{H}, (\mathbf{X}-\mathbf{X}^*)(\mathbf{X}-\mathbf{X}^*)^\top \rangle$ for $\mathbf{H} = \mathbf{A}^\top \mathbf{A}$, which is the same function that we studied in section 4.3 and show that 1-sided shampoo outperforms other adaptive algorithms.
**Detailed comparison between optimizers:** We would like to elaborate on the comparative insights our theory can offer. As mentioned in line 74-80, line 222-231 and line 430-434, we should really consider the tradeoff between different terms in the regret bound or convergence rate. A less structured preconditioner such as full-matrix AdaGrad may have strength in having smaller $|||\mathbf{g}\_{1:t}|||\_\mathcal{H}$ and its weakness lies in inducing larger $||\mathcal{X}||_\mathcal{H}$.
**Property of subalgebra:** The conditions for the well-structured $\mathcal{H}$ are required to ensure that $\mathbf{H}\_t\succeq \mathbf{H}\_{t-1}$. In particular, the condition for being closed under scalar multiplication and matrix addition (i.e., $\mathcal{K}$ is a linear subspace) and the condition for being closed under matrix multiplication are important in the proof of the desired property. Indeed, without the linear subspace condition, we have the failure example of two-sided Shampoo as discussed in Example 2.2 and Appendix A.3.1; moreover, without the matrix multiplication condition, we have the failure example of tridiagonal matrices as discussed in Example 2.3 and Appendix A.3.2. Also, as can be seen from the proofs in Appendix A.2, for any $\mathbf{H}\in\mathcal{H}$, we require $p(\mathbf{H}) \in \mathcal{H}$ for any polynomial $p$, which guarantees especially that $\mathbf{H}^{-1}\in\mathcal{H}$. All these requirements naturally inspire us to formulate the well-structured preconditioner sets using the notion of matrix subalgebra. | Summary: The paper studies preconditioned adaptive methods for online convex optimization in a unified manner. They define a particular class of “well-structured” preconditioners that recovers all 3 variants of the AdaGrad and the Shampoo algorithm as special instances. The main take away message of the paper is that by selecting the set of preconditioners in some “well-structured” sense, the metric for the norm of the sum of gradient outer products and the distance to solution could be determined in a principled sense for achieving better regret bounds (under certain scenarios).
Claims And Evidence: The main claim is that more structured preconditioners, i.e., sets of preconditioners that are relatively small (e.g., AdaGrad with a scalar step size or a diagonal-matrix preconditioner), could outperform less structured versions (e.g., AdaGrad with a full-matrix preconditioner). Their claim is supported for the Shampoo algorithm; they propose a new variant called one-sided Shampoo which essentially keeps track of only the "left matrix". They prove a regret bound which is better than that of the original Shampoo by improving the dependence on the norm of the optimal solution (Frobenius to spectral).
This result shows that a more principled approach to preconditioner design might help in developing new adaptive algorithms.
However, it is not clear how such a design process would work. For instance, the authors claim that the regret bound for full-matrix AdaGrad is worse than that of AdaGrad-Norm or AdaGrad with a diagonal preconditioner. However, there is no rigorous explanation as to how this is the case. In fact, comparing the regret bounds from the original paper for the composite mirror descent update version, full-matrix AdaGrad had better regret; (i) it is robust against rotations, unlike the diagonal preconditioner, and (ii) it measures the variance of gradients across all coordinates, as well as jointly between coordinates, to adapt the step size per coordinate accordingly, unlike the fixed step-size rule of AdaGrad-Norm. The authors must be precise with their claim and provide an analytical verification.
Methods And Evaluation Criteria: Yes. They compare methods using their regret bounds and dependence of those bounds to dimension, problem structures etc.
Theoretical Claims: I have checked the proof in details until Section 4 and also went over the proofs for Section 4. I haven’t spotted any mistakes.
Experimental Designs Or Analyses: Yes. The experiments show some interesting results but unfortunately they are based on synthetic data (and for a matrix problem). Therefore, it is not enough to make any meaningful conclusion for the algorithms. I would be very surprised to see AdaGrad-norm beating diagonal version for a larger vector regression problem with real data (full-matrix would be too expensive).
Supplementary Material: I went through the appendix completely.
Relation To Broader Scientific Literature: Adaptive methods are the main workhorse in many large scale problems. For instance, adaptive methods outperform SGD for training transformer-based models.
1. The paper attempts to unite the preconditioning of adaptive methods and understand their regret/convergence bounds based on how the preconditioner class is constructed.
2. They explain the possible effects of structure (i.e., size of the set of preconditioners) under certain scenarios.
3. They give a concrete example for Shampoo algorithm and propose a provably better version in terms of regret bounds.
Essential References Not Discussed: -
Other Strengths And Weaknesses: Strengths:
1. I think the authors’ attempt at unifying adaptive preconditioning using matrix sub-algebra on top of positive semi-definiteness is an interesting perspective into understanding the set structure of different preconditioned algorithm classes.
Weaknesses:
1. The claims about AdaGrad-class of algorithms is not verified mathematically, as opposed to the proposed variant of Shampoo that has better regret than the original algorithm. Similarly, the experiments are designed for matrix problems and might not capture the advantages of preconditioning for AdaGrad family.
2. Although I find the definition of well-structuredness genuine, it is not clear how we could use this framework to improve existing methods and develop new algorithm. The recipe is not clear, which the authors could explain more in the main text.
Other Comments Or Suggestions: 1. I don’t agree that full-matrix AdaGrad is considered as the ideal preconditioning method and there is no mathematical evidence that supports the argument. Please update the first paragraph accordingly or verify the claim with references, examples or derivations.
2. Please clarify the idea of structure for preconditioners clearly, before using the term for the first time in the introduction. In the current version, it is confusing as "structure" has different connotations in the context of preconditioned methods.
Questions For Authors: 1. Have you tried running Shampoo and one-sided Shampoo? It would make more sense to see them side by side on a matrix problem, as it is not directly comparable to AdaGrad family in an apples to apples sense.
2. Have you compared AdaGrad family of algorithm for a vector problem? It is more informative to observe their performance for a regression problem, for instance, with a real-world dataset in terms of the comparison of regret bounds.
3. Can you prove that AdaGrad-norm has better regret bounds than other variants using your framework?
4. Do you propose that there is a variation of AdaGrad family (similar to what you do for Shampoo) that will yield better regret guarantee?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for recognizing the value of our unified framework. We will address your concern below.
**Clarification of our results:** We never claim that the regret bound for full-matrix AdaGrad is worse than that of AdaGrad-Norm or AdaGrad with a diagonal preconditioner, and we apologize if any wording caused such a misunderstanding. We only argue that it is a misleading belief that **adding more structure to the preconditioner will always lead to worse performance**. We use the specific comparison between one-sided Shampoo and full-matrix AdaGrad to illustrate this clearly. We focus only on optimization speed, specifically characterized by regret bounds or convergence rates. Other properties such as rotation-equivariance are beyond the scope of this paper.
**Common belief that full-matrix preconditioner is “ideal”:** First we would like to clarify we agree with the reviewer on the statement that full-matrix preconditioner is not necessarily ideal as there is no theoretical justification on that. However, this is a common belief held by many researchers in the community. For example, Agarwal et al., 2019 explicitly argues that full-matrix preconditioning allows for better exploitation of anisotropic curvature in loss landscapes and their experiments show that full-matrix version can work better than Adam in some cases. The belief that full-matrix preconditioners are “ideal” is also implicit in the design of many optimizers, such as Shampoo, and Shampoo^2, which all attempt to approximate the full-matrix version more efficiently.
**Vector vs Matrix Problems:** The reviewer argues our comparison is unfair when using a matrix-structured problem to highlight the benefits of one-sided Shampoo. We disagree with this interpretation. First, vector and matrix problems are mathematically **equivalent** up to reshaping and vectorization. Second, models in practice are often built from complicated matrix-based modules, so our formulation is relevant to practice.
Most importantly, we do **not** claim that one-sided Shampoo is better than the AdaGrad family in general. We only demonstrate that it can be better on certain losses since it already sufficiently supports our claim. We agree that AdaGrad family may perform better on simple vector problems, which doesn’t contradict our argument.
**Comparison within AdaGrad family:** Our goal is **not** to provide a comprehensive comparison among all AdaGrad variants. As mentioned above, we never make any claims within the AdaGrad family. That said, we found in Figure 1 that AdaGrad-Norm and diagonal AdaGrad can outperform full-matrix AdaGrad empirically. We are also happy to clarify that AdaGrad-Norm can outperform full-matrix AdaGrad in some settings; see our response to the next question.
**Can AdaGrad-Norm be the best:** Yes. We can show that the regret bound of AdaGrad-Norm is the best for a specific setting when $\mathcal{X}$ is the $\ell_2$-norm constrained ball. Due to space constraints, we are happy to share the complete proof in follow-up discussion.
Even in practice, Xie et al., 2025 shows that AdaSGD (an EMA version of AdaGrad-Norm) can outperform rotated Adam on a GPT-2 model. While rotated Adam differs from the original, it offers an important insight that supports our claim: **coordinate-wise adaptivity is not always better than global adaptivity**. We think this is an even better example than a regression problem with real data on which AdaGrad-Norm can beat diagonal AdaGrad.
**Definition of Structure:** We agree that the notion of "structure" can be more clearly explained, even though we have discussed this in lines 29-40. In our usage, a structured preconditioner imposes constraints on the preconditioning, formalized via a subalgebra $\mathcal{K}$. As mentioned in the abstract and introduction, examples of structured preconditioners include layerwise, diagonal, and Kronecker-factored preconditioners. By contrast, the full-matrix preconditioner is the least structured because it imposes no constraints.
**Experiments on Shampoo:** Please see the response to reviewer N1Ca.
**Insights for Developing new algorithms:** Please see the response to reviewer N1Ca.
References
1. Efficient full-matrix adaptive regularization. (Agarwal et al., ICML 2019)
2. Adam Exploits $\ell_\infty$-geometry of Loss Landscape via Coordinate-wise Adaptivity. (Xie et al., ICLR 2025)
---
Rebuttal Comment 1.1:
Comment: **Clarification of our results:** To clarify my point, let’s consider the structure of the preconditioner and regret bounds particularly for AdaGrad-family. The regret bound for full-matrix is better than the others (ignoring the cost of full-matrix inversions). They might clearly coincide in certain scenarios, which are of little interest as such simple functions are not encountered in complex, non-convex network architectures. Therefore, I don’t think we are on the same page on this matter. Maybe you could explain your point with an example where the regret bound for a more structured method (AdaGrad-Norm) could be better than full-matrix AdaGrad.
**Vector vs Matrix Problems:** Technically speaking, there is a difference between treating matrices as they are vs vectorizing them in the context of preconditioners. For instance, let’s take Shampoo. As Shampoo is clearly developed for matrix problems, writing it in the vectorized form gives us a matrix preconditioner which is written as the Kronecker product of 2 matrices, which is structurally different than AdaGrad-type updates for vectors. I understand that one can apply AdaGrad on matrices by flattening, but comparing it to a method which is designed for matrix-valued variables might not be fair.
**Comparison within AdaGrad family:** One can observe empirical gains for more structured preconditioners over less structured ones, but this needs more detailed empirical analysis to explain the reason behind it. It is not completely clear to me how this behavior could be consistently observed, or whether this is a tuning/initialization related matter.
**Can AdaGrad-Norm be the best:** Could you please share the regret bounds for all three AdaGrad-family methods where AdaGrad-Norm is claimed to be the best?
**Insights for Developing new algorithms:** Thank you for the clarification. Please include this discussion in the main text for the final version of the paper.
---
Reply to Comment 1.1.1:
Comment: **Clarification of our result:** To clarify, we only compare regret bounds shown in table 1, which matches the standard regret bound for AdaGrad family in previous papers. We restate the results below for reference.
| Algorithm | Regret Bound |
| --- | --- |
| AdaGrad-Norm | $\|\|\mathcal{X}\|\|\_2 \sqrt{\sum_{t=1}^T \|\|\mathbf{g}_t\|\|_2^2}$ |
| Diagonal AdaGrad | $\|\|\mathcal{X}\|\|\_\infty \sum_{i=1}^d \sqrt{\sum_{t=1}^T g_{t,i}^2 }$ |
| Full-matrix AdaGrad | $\|\|\mathcal{X}\|\|\_2 \text{Tr} [(\sum_{t=1}^T \mathbf{g}_t \mathbf{g}_t^\top )^\frac{1}{2}]$ |
| One-sided Shampoo | $\|\|\mathcal{X}\|\|\_{op} \text{Tr} [(\sum_{t=1}^T \mathbf{G}_t \mathbf{G}_t^\top )^\frac{1}{2}]$ |
As we can see from this table, the regret bound of AdaGrad-Norm is always no larger than that of full-matrix AdaGrad, because it holds that $$\sqrt{\sum_{t=1}^T ||\mathbf{g}\_t||\_2^2} = \sqrt{\text{Tr} (\sum_{t=1}^T \mathbf{g}\_t \mathbf{g}\_t^\top)} \leq \text{Tr} [(\sum_{t=1}^T \mathbf{g}_t \mathbf{g}_t^\top)^{\frac{1}{2}}].$$ Here the inequality holds because $\sqrt{\text{Tr}(A)} \leq \text{Tr}(A^{\frac{1}{2}})$ for any positive semi-definite matrix $A$.
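As a numerical sanity check (our own illustration, not part of the rebuttal), the PSD trace inequality invoked above can be verified on random matrices; both sides are functions of the eigenvalues $\lambda_i \geq 0$, since $\sqrt{\text{Tr}(A)} = \sqrt{\sum_i \lambda_i}$ and $\text{Tr}(A^{1/2}) = \sum_i \sqrt{\lambda_i}$:

```python
import numpy as np

# Sanity check (illustration only) of the inequality used above:
# sqrt(Tr(A)) <= Tr(A^{1/2}) for any PSD matrix A.
rng = np.random.default_rng(0)

for _ in range(100):
    B = rng.standard_normal((5, 5))
    A = B @ B.T                         # random PSD matrix
    lam = np.linalg.eigvalsh(A)         # eigenvalues (nonnegative up to roundoff)
    lam = np.clip(lam, 0.0, None)
    lhs = np.sqrt(lam.sum())            # sqrt(Tr(A))
    rhs = np.sqrt(lam).sum()            # Tr(A^{1/2})
    assert lhs <= rhs + 1e-9

print("inequality held on all 100 random PSD matrices")
```

Equality holds exactly when $A$ has at most one nonzero eigenvalue, i.e., when the gradient outer products all align along a single direction.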
**Can AdaGrad-Norm be the best:** Again we will focus on the regret bounds in table 1. As mentioned in our first response, when $\mathcal{X}$ is chosen to be the $\ell_2$-norm ball $\\{ ||\mathbf{x}||_2 \leq r \\}$, AdaGrad-Norm has the smallest regret bound. We have already proved that AdaGrad-Norm always has smaller regret bound than full-matrix AdaGrad for any choice of $\mathcal{X}$. We will compare with diagonal AdaGrad and one-sided Shampoo below.
- Comparison with diagonal AdaGrad
For this set, we have that $||\mathcal{X}||\_\infty=\max_{\mathbf{x} \in \mathcal{X}} ||\mathbf{x}||\_\infty = r =||\mathcal{X}||\_2$. On the other hand, it holds that $$\sqrt{\sum_{t=1}^T ||\mathbf{g}\_t||\_2^2} = \sqrt{\sum_{i=1}^d \sum_{t=1}^T g_{t,i}^2 } \leq \sum_{i=1}^d \sqrt{ \sum_{t=1}^T g_{t,i}^2 }. $$ The inequality is because $\sqrt{\sum_{i=1}^d a_i} \leq \sum_{i=1}^d \sqrt{a_i}$. Therefore, the regret bound of AdaGrad-Norm is no larger than diagonal AdaGrad for this choice of $\mathcal{X}$.
- Comparison with one-sided Shampoo
We start with computing $||\mathcal{X}||\_{op}$. For any matrix-valued $X \in \mathcal{X}$, $||X||\_{op} \leq ||X||\_F = ||\text{vec}(X)||\_2=r$. When $X$ only has one non-zero element $X\_{1,1}=r$, its operator norm is exactly $r$. So $||\mathcal{X}||\_{op} = \max_{\mathbf{x} \in \mathcal{X}} ||\mathbf{x}||\_{op}=r$. On the other hand, it holds that $$\sqrt{\sum_{t=1}^T ||\mathbf{g}\_t||\_2^2} = \sqrt{\text{Tr} \sum\_{t=1}^T \mathbf{G}\_t \mathbf{G}\_t^\top } \leq \text{Tr} [(\sum_{t=1}^T \mathbf{G}_t \mathbf{G}_t^\top)^{\frac{1}{2}}]. $$ The inequality is again because $\sqrt{\text{Tr}(A)} \leq \text{Tr}(A^{\frac{1}{2}})$ for any PSD matrix $A$.
**Vector vs Matrix Problem:** Our example above can be equivalently cast as a vector problem, thus providing a concrete example of vector problems that shows the advantage of AdaGrad-Norm using a more structured preconditioner.
**Empirical Comparison:** We are glad that you agree with the empirical gains of more structured preconditioners. We do not expect or aim to see a more structured preconditioner **consistently** work better. Rather, **what we argue is that a more structured preconditioner is not necessarily always worse**, and we already have several empirical examples that support the claim we draw from the theoretical analysis.
**Insights for developing new algorithms:** We will include the discussion in the revision. | Summary: The paper provides regret bounds for a family of adaptive algorithms with structured preconditioner matrices. The analysis generalizes the technique introduced in the original Shampoo work and applies to Adagrad, Adagrad-norm, Adagrad-diag and one-sided Shampoo. Intriguingly, the paper shows that, for certain loss functions, the bound provided by one-sided Shampoo could be tighter than the bound for full-matrix Adagrad and also demonstrates this empirically on a simplified loss surface.
Claims And Evidence: The paper mostly has theoretical contributions and provides a simple empirical experiment supporting the theory.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I took a brief look and verified (not thoroughly) all the proofs in the Appendix
Experimental Designs Or Analyses: N/A
Supplementary Material: I looked at the proofs of various theorems in the Appendix.
Relation To Broader Scientific Literature: This work is well placed as a generalization of previous analysis of structured preconditioner matrices.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The main strength of the work is proposing a general unified analysis for various preconditioner matrices, and showing that the bounds for a more general class could be worse than a structured one. Although this should not be considered a weakness, the analysis does not lead to an improved algorithm. This could be considered as a part of future work.
Other Comments Or Suggestions: I would suggest adding two sided Shampoo and Shampoo^2 (based on the recent connections shown by Morwani et al. 2024) to the experimental plot.
Questions For Authors: 1. What do the authors think are plausible directions for using these bounds in designing better optimizers?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the value of our unified analysis. Below we will address your concerns.
**Comparison with Shampoo and Shampoo^2:** We appreciate the suggestion to include two-sided Shampoo and Shampoo^2 in the experiments. As reviewer tCyu pointed out there is lack of motivation for our current experiment, we conduct experiments on a better setting which is more practical and better motivated by our theory analysis. We will add this experiment and focus more on it in the future revision.
We consider a synthetic linear regression problem $||\mathbf{A} \mathbf{X} - \mathbf{y}||\_2^2$ where $\mathbf{A}$ is the data matrix and $\mathbf{y} =\mathbf{A} \mathbf{X}^*$ is the label vector generated by ground-truth $\mathbf{X}^*$. Thus, the loss function can be equivalently written as $f(\mathbf{X}) = \langle \mathbf{H}, (\mathbf{X}-\mathbf{X}^*)(\mathbf{X}-\mathbf{X}^*)^\top \rangle$ for $\mathbf{H} = \mathbf{A}^\top \mathbf{A}$, which is the same function that we studied in section 4.3 and show that 1-sided shampoo outperforms other adaptive algorithms. We consider $\mathbf{X} \in \mathbb{R}^{d \times d}$ with $d=10^3$. We set the eigenvalues of $\mathbf{H}$ by $\sigma_1 = \cdots = \sigma_{10}=1$ and $\sigma_i = \frac{1}{(i-10)^2}$ for $11 \leq i \leq 10^3$. Each element of the solution $\mathbf{X}^*$ is independently sampled from $\mathcal{N}(0, \frac{1}{d})$.
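A minimal sketch of constructing this synthetic problem (our reconstruction, not the authors' code; $d$ is reduced from $10^3$ for speed, and drawing $\mathbf{H}$ from a random orthogonal eigenbasis is our assumption):

```python
import numpy as np

# Reconstruct the quadratic f(X) = <H, (X - X*)(X - X*)^T> with the
# stated spectrum: sigma_1..sigma_10 = 1, sigma_i = 1/(i-10)^2 afterwards.
rng = np.random.default_rng(0)
d = 100  # reduced from 1e3 to keep the demo fast

sigma = np.ones(d)
i = np.arange(11, d + 1)
sigma[10:] = 1.0 / (i - 10) ** 2

# H = U diag(sigma) U^T for a random orthogonal U (our assumption).
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
H = U @ np.diag(sigma) @ U.T

# Ground-truth solution, entries independently ~ N(0, 1/d).
X_star = rng.standard_normal((d, d)) / np.sqrt(d)

def loss(X):
    E = X - X_star
    return np.trace(H @ E @ E.T)

def grad(X):
    # d/dX tr(H E E^T) = 2 H (X - X*) since H is symmetric.
    return 2.0 * H @ (X - X_star)

X = np.zeros((d, d))
print(f"initial loss: {loss(X):.4f}")
```

The loss and gradient both vanish at `X_star`, so any of the compared optimizers can be run on `grad` directly.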
We compared all the optimization algorithms by sweeping learning rate from 1e-4 to 1e2. The plots are [here](https://docs.google.com/document/d/e/2PACX-1vR37RVh5tPyZFbelExgmMfoXk_y7Egv0TvxXw4WqzVJ9zYHECzzRYjv-2zE_QGwcbkBYvXFCSSrweVl/pub). The results are consistent with the original experiment: (1). **one-sided Shampoo outperforms other variants of AdaReg;** **(2). Shampoo is slightly worse than one-sided Shampoo**; **(3). Shampoo^2 fails to make progress in optimizing this loss function.** We are unsure why Shampoo^2 underperforms on this loss function but note that Morwani et al. (2024) also does not provide empirical evidence of the effectiveness of Shampoo^2. Investigating this behavior is beyond the scope of our current work, but may be a worthwhile direction for future study. The implementation of Shampoo and Shampoo^2 is [here](https://limewire.com/d/4fwuX#gtS4Jg1Wpi).
**One-sided Shampoo as a new, improved algorithm:** We disagree with the reviewer’s opinion that this work doesn’t lead to an improved algorithm. To the best of our knowledge, we are the first to explicitly define and analyze the one-sided Shampoo algorithm even though it is informally mentioned on social media. And our section 4.2 shows that it improves original Shampoo theoretically and section 4.3 shows it can achieve the best rate on some specific loss functions, which is empirically verified by our added experiment above.
**Plausible directions for designing better optimizer:** Thanks for this insightful question. We believe the definition of well-structured preconditioners can help designing novel optimization algorithms as discussed in line 261-308 though they may not necessarily be better.
Notably, our framework already encompasses recent algorithms such as Adam-mini (Zhang et al., 2024) and Adalayer (Zhao et al., 2024) via specific choices of the subalgebra $\mathcal{K}$. Inspired by these two examples, we identify three basic operations that can construct new matrix subalgebras that can be used for defining well-structured preconditioner.
1. **Direct sum** of matrix subalgebras. This is defined in line 261-308.
2. **Kronecker product** between a matrix subalgebra and identity matrix. For a matrix subalgebra $\mathcal{K}$, we can define $\mathcal{K}’$ by $\mathcal{K} \otimes \mathbf{I}_d = \\{ \mathbf{A} \otimes \mathbf{I}_d | \mathbf{A} \in \mathcal{K}\\}$.
3. **Rotations** via orthogonal matrices. For a matrix subalgebra $\mathcal{K}$ and an orthogonal matrix $\mathbf{U}$, we can define $\mathcal{K’}$ by $\mathbf{U}^\top \mathcal{K} \mathbf{U} = \\{ \mathbf{U}^\top \mathbf{A} \mathbf{U} | \mathbf{A} \in \mathcal{K} \\}$. Such $\mathcal{K’}$ is still a matrix subalgebra.
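As a quick illustration (ours, not from the rebuttal) of operation 3, closure under matrix multiplication survives rotation: for diagonal $\mathbf{A}, \mathbf{B}$ and orthogonal $\mathbf{U}$, $(\mathbf{U}^\top \mathbf{A} \mathbf{U})(\mathbf{U}^\top \mathbf{B} \mathbf{U}) = \mathbf{U}^\top (\mathbf{A}\mathbf{B}) \mathbf{U}$, and $\mathbf{A}\mathbf{B}$ is again diagonal:

```python
import numpy as np

# Check that the rotated diagonal set {U^T D U : D diagonal} is closed
# under matrix multiplication, using the identity U U^T = I.
rng = np.random.default_rng(0)
d = 6

U, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal U
A = np.diag(rng.standard_normal(d))               # diagonal elements of K
B = np.diag(rng.standard_normal(d))

prod = (U.T @ A @ U) @ (U.T @ B @ U)   # product of two rotated elements
expected = U.T @ (A @ B) @ U           # rotated element with diagonal A B

assert np.allclose(prod, expected)
print("rotated diagonal subalgebra is closed under multiplication")
```

The same cancellation argument shows closure under addition and scalar multiplication, so $\mathbf{U}^\top \mathcal{K} \mathbf{U}$ inherits the full subalgebra structure from $\mathcal{K}$.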
These operations offer a principled way to design new preconditioners and optimizers. As noted in our conclusion (lines 430–435), a theory-driven strategy would be to evaluate terms in the regret bound or convergence rate under different $\mathcal{K}$ and choose $\mathcal{K}$ to optimize the trade-off. While this remains challenging to evaluate empirically on large models, we view this as an interesting direction for future work. | null | null | null | null | null | null | null | null |
KoNODE: Koopman-Driven Neural Ordinary Differential Equations with Evolving Parameters for Time Series Analysis | Accept (poster) | Summary: **Edit**: Having read through the rebuttals and the other reviews, I am satisfied with the paper and have increased my score from 3 (weak accept) to 4 (accept). I am grateful to the authors for taking the time to respond, and have left remaining specific thoughts in the Rebuttal Comment.
**Original Review**:
This paper introduces Koopman-Driven Neural Ordinary Differential Equations (KoNODEs). The motivation is that Neural ODEs given by:
$\frac{d\mathbf{x}}{dt} = f(\mathbf{x}, t, \theta)$
Can benefit from making the parameters time-dependent:
$\frac{d\mathbf{x}}{dt} = f(\mathbf{x}, t, \theta(t))$.
One example being that the underlying dynamics might remain unchanged, but the parameters of the dynamics do, for example in engineering the wear and tear of instruments can change the observed dynamics, even though the underlying physics remains unchanged.
An existing solution is ANODE-V2 that models $\frac{d\mathbf{x}}{dt} = f(\mathbf{x}, t, \theta(t))$ using a neural network and $\frac{d\theta}{dt} = g(\theta, t, \phi)$, so the network parameters are given by another ODE.
This work adds further modeling restrictions to the ODE with the aim of modeling more complex behaviors, for longer time series. This is achieved by using Koopman theory, which at a high level says that non-linear dynamics can be achieved by using linear dynamics in a different space. This paper explores how this can be achieved in practice for Neural ODEs, provides an implementation, theoretical analysis and experimental evidence showing modeling Neural ODEs using Koopman operators is an effective way to restrict the dynamics of the parameters to actually improve the model.
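The hierarchy described above can be sketched with a toy Euler integration (our illustration only, not the paper's implementation; the dynamics $f$, the matrices `A` and `C`, and all dimensions are made-up assumptions): the state $\mathbf{x}$ follows $\frac{d\mathbf{x}}{dt} = f(\mathbf{x}, \theta(t))$, while $\theta(t)$ is decoded from coordinates $w(t)$ evolving under a linear, Koopman-style ODE $\frac{dw}{dt} = Aw$.

```python
import numpy as np

# Toy coupled system: nonlinear state dynamics whose parameters are
# driven by linear dynamics in a lifted (Koopman) space.
rng = np.random.default_rng(0)
dim_x, dim_w, dim_theta = 2, 4, 3

A = -0.5 * np.eye(dim_w)                           # stable linear Koopman dynamics
C = 0.5 * rng.standard_normal((dim_theta, dim_w))  # decode theta from w

def f(x, theta):
    # Toy state dynamics whose coefficients are the evolving parameters.
    return np.array([theta[0] * x[1], -theta[1] * x[0] + theta[2]])

x = np.array([1.0, 0.0])
w = rng.standard_normal(dim_w)
dt = 0.01
for _ in range(500):
    theta = C @ w                  # time-varying parameters theta(t) = C w(t)
    x = x + dt * f(x, theta)       # Euler step for the state
    w = w + dt * (A @ w)           # Euler step for the Koopman coordinates

print("final state:", x)
```

In KoNODE proper the decoder and state dynamics are learned neural networks and an adaptive ODE solver replaces the fixed Euler step, but the division of labor is the same: all nonlinearity lives in $f$ and the decoder, while the parameter evolution stays linear.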
Claims And Evidence: The claims presented are supported by experimental evidence. However, there are some relevant baselines that would improve the strength of the evidence. For example ODE-RNN or Latent ODE (https://arxiv.org/abs/1907.03907), or Neural Processes (https://arxiv.org/abs/1807.01622) as another method. These are common baselines that would improve the experimental results.
I am also concerned with Figure 3, which shows vanilla NODE failing on the spiral synthetic dataset. Why is this the case, given that it does perform well in the original paper?
Methods And Evaluation Criteria: The experiments carried out make sense for the application. The datasets are a good mix of synthetic to demonstrate key points and real to show that the method works on real data.
Theoretical Claims: The theoretical claims as far as I can tell are correct. I have not checked the proofs in the appendix in significant detail. However, I am confused by Line 254 on the left:
$-\text{max}\_{m < i \leq N}\log\_i \xi\_i - 1$
What does it mean for a logarithm to be subscripted? Previously it is given that $\xi = [\langle g, u\_1 \rangle, \langle g, u\_2 \rangle ... \langle g, u\_m \rangle]$, so how can $i$ be larger than $m$ and $\leq N$? Especially given that $n$ is the amount of data, but $\xi$ seems to be the dimension $m$ of the Koopman space?
Experimental Designs Or Analyses: The experiments and analyses associated are sound. But as mentioned would be improved with stronger ODE baselines (e.g. ODE-RNN, Latent ODE) and stronger baselines for generalising to new dynamics (e.g. Neural Processes).
Supplementary Material: I read the additional related work, inspiration of KoNODE, and the additional experiments.
Relation To Broader Scientific Literature: The key contributions are framed appropriately in relation to the scientific literature.
Essential References Not Discussed: There are none as far as I can tell.
Other Strengths And Weaknesses: Strengths:
- The idea is robust and theoretically motivated
- The datasets are diverse and relevant
- The experiments support the claims
Weaknesses:
- The writing needs some work, see the Other Comments or Suggestions section of the review for specific cases. The description of KoNODE at points is overly complicated, or unnecessary. For example, at the beginning of page 4, one possible $D$ matrix is described in detail, before describing a second $D$ matrix with $2\times2$ block diagonals, the first version is never used and therefore is not necessary to describe. Section 3.2.2 is another section that overly complicates the description, where the point is to say that the adjoint sensitivity method is used to calculate gradients, which is already described in detail in the original Neural ODE paper
Other Comments Or Suggestions: This is a list of contained writing changes I would make. They are suggestions so are not required:
- Line 17 Right: "The Ordinary Differential Equations" should be just "Ordinary Differential Equations"
- Line 19 Right: "of ODEs" should be "of the ODE"
- Line 38-39 Right: "encodes the deeper underlying dynamical principles of the evolution (namely deep-level information) inherently" should be "inherently encodes the underlying dynamics of the evolution (namely deep-level information)"
- Line 59 Left: "Based on the insight" should be "Based on this insight"
- Line 80-81 Left: "and ultimately governed by a deeper-leveled, intrinsic linear core - the Koopman linear dynamics" should be "and ultimately governed by the underlying Koopman linear dynamics"
- Line 66 Right: "Applying the Koopman Theory" should be just "Applying Koopman Theory"
- Line 94-96 Right: ", rather than propagating activations through discrete layers as in recurrent or deep networks. Neural ODEs employ numerical solvers" should be ". Rather than propagating activations through discrete layers as in recurrent or deep networks, Neural ODEs employ numerical solvers"
- Line 124 Left: "The Koopman Theory" should be "Koopman Theory"
- Line 124-125 Left "dynamic system analysis" should be "dynamical system analysis"
- Line 141 Left: "commonly finitely approximated" can be just "commonly approximated" since the phrase "finite number" is used later in the sentence
- Line 144 Left "The Koopman operator are" should be "The Koopman operator is" or "The Koopman operators are"
- Line 112 Right: "within the NODE setups" should be "within the NODE setup"
- Line 117-120 Right: "Besides, we assume the trajectories of the intrinsic linear dyanmics are represented as w(t)" should be "We let the intrinsic linear dynamics be represented as w(t)"
- Line 126-144 Right: Describing the three hierarchical levels should refer to the "dynamics" not the "dynamic"
- Line 158 Right: The sentence "We model the h by the neural network due to its flexibility" is not needed; if kept, it should be "We model h using a neural network due to its flexibility"
- Line 169 Left: "in replace of the conjugated imaginary eigenvalues of A" should be "that represent the imaginary eigenvalues of A"
- Figure 2 caption: Should refer to the "dynamics" not the "dynamic"
- Figure 2 caption: "over altogether N time points" should be just "over N time points"
- Section 4 description: "just as the red line" and "just as the green line" should be "shown by the red line" and "shown by the green line"
- Line 287 Left: "At last, we get the loss between the predicted trajectory and the true trajectory", should be "Using the true trajectory and the predicted trajectory, we can calculate the loss"
- Line 295-300 Left: "We serve w0 as an encoding" should be "w0 is an encoding" and "Learnable setting when given sequence as input" should be "Learnable setting when given a sequence as input"
- Line 375 Left: "Preformance metric" should be "Performance metric"
- Line 405 Left: "should inherently low-dimensional" should be "should inherently be low-dimensional"
- Lines 433-439 Left: When using quotation marks in Latex they can be properly displayed by changing "S" to ``S''
- Line 420 Right: "ETTh1" should be "ETTH1"
- Line 427 Right: "and the Koopman modeling" should be "and Koopman modeling"
Questions For Authors: 1. Is it possible to represent KoNODE as another, larger first-order ODE:
$z = [x, \theta, w]$, $\frac{dz}{dt} = [f(x, t, \theta), \frac{\partial h}{\partial w}Aw, Aw]$, $z\_0 = [x\_0, h(w\_0), w\_0]$
If this is the case, it might be easier to write $\frac{\partial h}{\partial w}Aw$ as $h\_1(w)$ and $h(w\_0)$ as $h\_2(w\_0)$. That way the whole system can be solved together in one ODE solve, which will likely reduce the computation required per solve. This also makes the adjoint sensitivity derivation trivial since the standard adjoint sensitivity can be applied to the extended ODE system.
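The one-solve idea proposed in this question can be sketched numerically. Below is a minimal illustration with toy stand-ins for $f$, $h$, and $A$ (all hypothetical, not the paper's components): the observed state and the latent linear state are stacked into one vector so a single solver call integrates both levels together.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy components (illustrative only):
A = np.array([[0.0, -1.0], [1.0, 0.0]])        # latent linear dynamics dw/dt = A w
h = lambda w: np.tanh(w)                        # maps latent w to parameters theta
f = lambda x, theta: -theta[0] * x + theta[1]   # parameterised observed dynamics

def augmented_rhs(t, z):
    """RHS of the stacked system z = [x, w]; one solve covers both levels."""
    x, w = z[0], z[1:]
    return np.concatenate([[f(x, h(w))], A @ w])

z0 = np.array([1.0, 0.5, -0.5])                 # [x0, w0]
sol = solve_ivp(augmented_rhs, (0.0, 2.0), z0, rtol=1e-8, atol=1e-10)
x_T, w_T = sol.y[0, -1], sol.y[1:, -1]
```

Because $A$ here is a pure rotation, the latent norm $\Vert w(t)\Vert$ is conserved, which gives a quick sanity check on the combined solve.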
2. In Figure 4C, why does error increase as the dimension of $w$ increases. Is it hard for KoNODE to learn redundancy? Or is it encouraged to use all of its dimensions to learn the underlying Koopman dynamics? The theoretical bound presented says that the error decreases with more dimensions, why is this not the case? Does the model overfit when it has more capacity, i.e. $N$ is not large enough, so $r$ is negative in Theorem 3.2 so the error bound does not decrease with $m$?
3. Can you rephrase your argument around Table 4? The argument is that the Euler method can benefit from KoNODE's more accurate dynamics. However, this does not make sense to me. First, Table 4 shows that the Euler method is also better using NODE, so the claim does not hold empirically. Additionally, RK4 and DOPRI5 should be more accurate than the Euler method during the ODE solve, so they should also benefit from improved modeling of the dynamics.
Based on the answers to these questions and the others mentioned I will increase my evaluation.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your deep understanding of our method and insightful suggestions.
**[1. comparison to baselines]**
We have included the baselines mentioned, Latent ODE (ODE enc) and Neural Processes, in `supplementary_table_reviewers_Snqd_waH1.pdf`, https://anonymous.4open.science/r/KoNODE-D8F2/. We didn't compare ODE-RNN due to its official code not supporting prediction tasks.
---
**[2. failure on spiral synthetic dataset]**
Lines 283-285 on the right clarify that the spiral dataset we used is obtained by $[\frac{dx}{dt}\ \frac{dy}{dt}]^⊤= A^⊤(t)[x^3\ y^3]^⊤$ where $A(t)=\begin{bmatrix}-0.1+0.5\sin t&2.0+\cos t\\\\-2.0+0.5\cos 2t&-0.1-0.5\sin t\end{bmatrix}$, which is more complicated than the one used in the vanilla NODE, i.e., $ A(t) = \begin{bmatrix}-0.1&2.0\\\\-2.0&-0.1\end{bmatrix}$. We successfully reproduced the fittings for the spiral proposed in the vanilla NODE paper with both our method and NODE, yet the modified data is more complicated and thus demonstrates the limitation of the conventional model.
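For reference, this modified data-generating process can be reproduced in a few lines; the initial condition and time horizon below are illustrative assumptions, not necessarily the ones used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def A_t(t):
    # Time-dependent matrix from the modified cubic-spiral task
    return np.array([
        [-0.1 + 0.5 * np.sin(t),      2.0 + np.cos(t)],
        [-2.0 + 0.5 * np.cos(2 * t), -0.1 - 0.5 * np.sin(t)],
    ])

def rhs(t, s):
    # [dx/dt, dy/dt]^T = A(t)^T [x^3, y^3]^T
    return A_t(t).T @ s**3

# Assumed initial condition and horizon, chosen only for illustration
t_eval = np.linspace(0.0, 5.0, 200)
sol = solve_ivp(rhs, (0.0, 5.0), [0.3, 0.3], t_eval=t_eval, rtol=1e-8)
trajectory = sol.y.T   # shape (200, 2): one noise-free spiral sample
```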
---
**[3. $\xi_i$ in line 254]**
We apologize for the oversight in notation. We will clarify in the revision that $\xi=[\xi_1\ \xi_2\ \cdots\ \xi_m]^⊤$, while the coordinates $\xi_i≜\langle g, u_i\rangle$ are defined even for $i>m$, as the true Koopman space has an infinite number of bases $\left\\{u_i\right\\}_{i=1}^\infty$ (though no more than $N$ dimensions are identifiable in the data-driven scenario).
---
**[W1. writing]**
Thank you for the valuable suggestions. We will modify accordingly (including the minor fixes and the simplifications of the KoNODE description and Section 3.2.2).
---
**[Q1. larger ODE expression]**
This is a promising insight that would indeed be easier to write down. However, this pipeline sacrifices the consistency of function $h$ and does not significantly reduce the computational cost (per solve). The reasons are: (1) in the forward integration, both pipelines must evaluate function $f$, compute $Aw$, and evaluate one additional network, either $h$ or $h_1$; (2) in the adjoint method, both pipelines require the gradient through network $h$ (or $h_1$), which requires a backpropagation through it; (3) the two networks, $h$ and $h_1$, both need the capacity to map from the $w$-space to the $θ$-space, hence their sizes do not differ much.
---
**[Q2. Fig. 4C]**
Thank you for your deep understanding of our method. Fig. 4(c) shows a curve dropping in $[2, 10]$ and rising if $m > 10$ with a tolerable relative perturbation. Experimentally, more weights are introduced into the networks when $m$ is large, which may add extra uncertainty and instability during training; [W2 & Q2. selection of Koopman dimension], *Response to Reviewer iauo* explains why the practical dimension is not that stable. Theoretically, the error bound in Thm. 3.4, Manuscript is dominated by two terms, one with a factor of $\frac{m^2}{\sqrt N}$ in coefficient $p$ and the other of order $m^{-\frac r3}$, which does indicate instability when $m$ is too large and may suggest overfitting. On the other hand, the order $r$ is negative only when $m$ is so small that the first $m$ bases $\\{u_i\\}_{i=1}^m$ fail to model the true observable function $g$; hence this scenario only influences the left-hand side of the graph, where $m$ is below the theoretical lower bound given in Thm. R.4, [W2 & Q2. selection of Koopman dimension], *Response to Reviewer iauo*.
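As a purely illustrative sketch of the two competing terms, the U-shape (error dropping, then rising with $m$) can be reproduced numerically; the constants and the exact functional form below are assumptions chosen only to show the qualitative behavior, not the paper's values.

```python
import numpy as np

# Illustrative constants (assumptions, not the paper's values)
N, r = 1_000_000, 3.0
m = np.arange(2, 61)

# One term grows like m^2 / sqrt(N); the other decays like m^(-r/3)
bound = m**2 / np.sqrt(N) + m**(-r / 3.0)

best_m = int(m[np.argmin(bound)])   # interior minimum -> U-shaped curve
```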
---
**[Q3. Tab. 4]**
We apologize for overlooking this phenomenon; the table was aimed at showing the more significant improvement from the KoNODE framework when using the Euler method. We will rephrase the interpretation in the revision. Regarding the lateral comparison, the integration by RK4 and DOPRI5 is undoubtedly more accurate, yet the superiority of Euler is caused by the different intrinsic objectives under the regularly sampled trajectory, i.e., the network for Euler estimates $\int_t^{t+Δt}f(x(t), t)\ dt$ for a fixed $Δt$ but the others for a mutable $Δt$. The fixed $Δt$ reduces the complexity of the target function and thus simplifies the task when using Euler.
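The accuracy ordering referred to here (fixed-step Euler is first-order, RK4 fourth-order) can be verified with a minimal sketch on a toy ODE $dx/dt=-x$ with exact solution $e^{-t}$; this example is independent of the paper's models.

```python
import numpy as np

f = lambda t, x: -x   # toy ODE; exact solution x(t) = e^{-t}

def euler(f, x0, t0, t1, n):
    """Fixed-step forward Euler (first-order)."""
    x, t, dt = x0, t0, (t1 - t0) / n
    for _ in range(n):
        x = x + dt * f(t, x)
        t += dt
    return x

def rk4(f, x0, t0, t1, n):
    """Fixed-step classical Runge-Kutta (fourth-order)."""
    x, t, dt = x0, t0, (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + dt / 2, x + dt * k1 / 2)
        k3 = f(t + dt / 2, x + dt * k2 / 2)
        k4 = f(t + dt, x + dt * k3)
        x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return x

exact = np.exp(-1.0)
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 50) - exact)
err_rk4 = abs(rk4(f, 1.0, 0.0, 1.0, 50) - exact)
```

At equal step counts RK4 is far more accurate per solve; the argument in this reply concerns what the network must learn during training, not solver accuracy itself.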
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to respond. I have read the other reviews and responses, and am generally happy with the paper now, I will raise my score accordingly. Please see remaining comments below:
**Additional Baselines and ODE-RNN**: Thank you for running the additional baselines, as well as the additional baselines for Reviewer Snqd. The results are still good for KoNODE, which is reassuring. I'm not particularly convinced by the reason for not including ODE-RNN, the prediction can simply be a function of the hidden state $y(t) = g(h(t))$, where $h$ follows an ODE-RNN. This should not be too hard to implement. **However**, that being said I also recognise there is already a large variety of baselines, and another baseline is not necessary.
**Spiral Dataset**: Thank you for the clarification, my apologies for not noticing this difference. The paper would be improved with clear communication around this point. That is, at the start of Section 5.1, saying something along the lines of "We adapt the fitting task proposed in Chen et al. 2018, by adding a time dependency to the matrix $A$. We add sines and cosines to produce the following new time-dependent matrix: ..."
**$\xi\_i$ Line 254**: My apologies, but this still is not clear to me. Please can my error in the following reasoning be identified and explained:
- Lines 140-144 Left: In practice the infinite dimensional Koopman space is finitely approximated by modeling a finite number of bases $\mathbf{u} = [u\_1, u\_2,..., u\_m]$, we have $m$ basis functions.
- Line 253 Left: $\mathbf{\xi}=[\langle g, u\_1 \rangle, ..., \langle g, u\_m \rangle]$. These are inner products between the function of interest in Theorem 3.2 $g$ and the finite Koopman bases $u\_i$. Each of these give scalars, and we have to have finitely many since we are doing a practical implementation. So there are $m$ components of $\bf{\xi}$ since there are $m$ basis functions.
- If $\mathbf{\xi}$ is $m$ dimensional then $i$ can only be between $1$ and $m$, not $m+1$ and $N$.
- This theorem is about the "Order-m Koopman Operator", so we have $m$ basis functions, $g$ is the function of interest, and we have $N$ data points. Why do we have new basis functions per data point? The true Koopman space has infinite, but this theorem is about the approximation, where there are finitely many?
It is unclear what I am not understanding, and more broadly, as we have discussed, the paper will benefit significantly in the long-term by making the theory clearer.
**Larger ODE**: This point has been misunderstood, I was really asking about the specific implementation, i.e. the code. It wasn't clear to me if the ODE for $w$ was solved first, and then the ODE for $x$, requiring two ODE solves in series and storing the $w(t)$ trajectory. I made a mistake and realised the combined ODE could be written as:
$\frac{d}{dt}[x, w] = [f(x, t, h(w, \psi)), Aw]$, $[x, w]\_{0} = [x\_0, w\_0]$
Without an unnecessary ODE for $\theta$. Implementing this way requires only one ODE solve and not storing a dense trajectory $w(t)$. Having now looked through the supplementary code I realise this is exactly how KoNODE has been implemented, so I accept I got this wrong. In this case I'm curious what you think would happen if KoNODE is implemented differently:
$\frac{d}{dt}[x, w] = [f(x, t, w(t), \psi), Aw]$, $[x, w]\_{0} = [x\_0, w\_0]$
That is, rather than using $h$ to generate parameters for $f$, could $w$ be used as a concatenated input to $f$ which has fixed parameters $\psi$? The universal approximation theorem would make this valid, but how do you think it would affect learning, parameter efficiency (no $h$ network, not so many evolving $\theta$s), computation?
**Figure 4C**: Thank you for your answer. It sounds like it is a case of overfitting am I correct? How would you suggest tuning $m$, are there good values to begin with relative to $N$ and the dimension of $x$?
**Table 4**: This is not a major concern (it's in the Appendix), and I am still happy with the paper. But I'm still not convinced by this reasoning. No matter the ODE solver, they are predicting $x(t)$ for a fixed set $[t\_1, ... t\_f]$. All that should change theoretically is the accuracy of the solve. In practice of course this affects the gradients and therefore the training, because they produce different trajectories. But if gradient descent is run long enough they should reach similar results. Is it because Euler produces "simpler" trajectories, i.e. it will not focus on significantly complicated parts of the trajectory, and therefore despite being a worse solve it is more stable? Maybe training with Euler at first, then changing to a more accurate solver at test time or even part way through training could be better? This is very much a side point, and I know was not the point of the Table (which was to show that KoNODE is robust to the solver), I just think it is curious, and goes against what I've experienced between solvers.
---
Reply to Comment 1.1.1:
Comment: Thank you sincerely for raising the score and for the time spent. Your intelligent feedback has significantly enhanced the depth and clarity of our manuscript.
**[1. Additional Baselines and ODE-RNN]** Thank you for your thoughtful suggestion. We agree that, theoretically, the ODE-RNN is capable of making predictions. However, we omitted it because the official code raises an error: "Exception: Extrapolation for ODE-RNN not implemented," which prevents a direct comparison. Due to the limited time of the rebuttal, we could not re-implement it from scratch, and we sincerely apologize for this.
**[2. Spiral Dataset]** Thank you for the helpful suggestion and for pointing this out. We will modify the manuscript accordingly.
**[3. $ξ_i$ in line 254]** We apologize for the confusion caused by the notation and thank you for the patient analysis. The first three steps of reasoning are correct. For the fourth one, the set of basis functions for the true Koopman space is infinite, hence we are able to identify an infinite number of $⟨g, u_i⟩$. The coordinate $ξ_i$ in line 254 is the only one in which $i$ can be greater than $m$; we will replace it with $⟨g, u_i⟩$ to avoid confusion.
Regarding the questions in the fourth step of reasoning, we do not have new basis functions per data point. The reason why $i\le N$ is that the space the observable function $g$ lies in has a maximal dimension of $N$. As Lemma R.3, Response to Reviewer iauo, defines a possible observable function for any $θ$ trajectory to model the dataset, these functions span an observable function space with a dimension of no more than $N$. Consequently, $⟨g, u_i⟩=0, ∀ i>N$.
**[4. larger ODE]** Sorry for the misunderstanding and thank you for the clarification. Your current interpretation of the KoNODE implementation is very precise.
We are very much thrilled by the acute insights you possess. Regarding your proposed alternative formulation (Imp. II), we agree that this is a theoretically valid and feasible variant. This formulation resembles the idea behind ANODEs (https://arxiv.org/abs/1904.01681), where additional variables are appended to the state to enhance expressivity.
While both implementations are in principle functionally equivalent under the universal approximation theorem, their motivations differ. Our design (Imp. I) is motivated by a desire to extract deep-level dynamic structures through $w(t)$. In contrast, Imp. II treats $w(t)$ as a refined input to enhance the model’s capacity, rather than explicitly encoding the system’s intrinsic dynamics. Technically speaking, every output neuron is linked to $w(t)$ in Imp. I, while in Imp. II some may be disconnected from it and activated only by $x(t)$.
The following discusses practical factors.
(1) Learning Dynamics. When $w(t)$ is concatenated with $x(t)$, its influence on the dynamics must be learned implicitly through a fixed network $f$. Without architectural constraints, this may often lead to learned representations that are dominated by $x(t)$, with $w(t)$ playing a minor role, especially when $w(t)$ is low-dimensional.
(2) Parameter and Computational Efficiency. Although Imp. II appears simpler, it places the full burden of modeling complex, time-varying interactions on a single network $f$, which may increase training time or convergence difficulty. As highlighted in [1], effectively modeling intricate dependencies from low-dimensional auxiliary inputs (like $w(t)$) often requires large networks. By separating the responsibilities—$h$ modeling latent evolution and $f$ focusing on observable dynamics—Imp. I achieves better parameter specialization.
**[5. Figure 4C]** Yes, "overfitting" is indeed precise. A suggested selection criterion of $m$ is increasing it from $⌈\frac D{D-r}⌉$ where $D$ is the model size and $r<\min\{n, N\}$ is the rank of data (described thoroughly in Theorem R.4, Response to Reviewer iauo) until the performance does not improve. An empirical choice is 10 for simple dynamics in 2D space and 100 for high-dimensional ones.
**[6. Table 4]** We believe the superiority of the Euler solver is caused by the technical reason that regression w.r.t. low-dimensional inputs (such as $t$) tends to be more difficult for neural networks to model [1]. Specifically, the Euler method only models $f(x(t),t;θ(t))$, but the high-order methods RK4 and DOPRI5 require modeling $f(x^\*,t^\*;θ(t))$ where $x^\*≠x(t^\*)$, which relies on more sensitive modeling of $t$, making these models suffer more from the intractability caused by the low dimensionality of time. The reason why the correspondence of $t$ and $x(t)$ relies less on the modeling of $t$ is implied by the mapping $t_s(⋅)$ from state to time in Lemma R.3, Response to Reviewer iauo.
As for the further questions, (1) yes, we agree that the Euler solver may yield more stable results; (2) regarding switching solvers later, we believe this is interesting and worth exploring.
[1] See link: https://link.springer.com/chapter/10.1007/978-3-030-47358-7_27 | Summary: An architecture based on neural ODE with time-varying parameters is proposed. The idea is to model the dynamics of the parameters with latent linear dynamics, which the authors motivate by referring to the Koopman operators. The superior prediction performance of the proposed method is empirically demonstrated.
### Update after rebuttal
Thanks for the clarification, the additional discussion will make the paper more convincing. I keep my originally positive score.
Claims And Evidence: I found two points in the claims. One is about its practical utility, i.e., better performance in prediction. I think this aspect is nicely demonstrated with the empirical results.
The other point of the claims I found is about "understanding" of data or models. For example, the authors claim:
> We propose a three-level hierarchical architecture ... that deepens our understanding of system behavior and its governing laws.
> By leveraging Koopman operators, our framework could use spectral tools for system analysis ...
Although I do not deny such a possible utility of the method in general, these aspects do not seem supported by concrete empirical observations.
Methods And Evaluation Criteria: The time-series prediction experiments seem valid.
Theoretical Claims: A bunch of theoretical claims are presented in Section 3.3 and the appendices. They may be okay, but I do not see how relevant they are to the proposed method. The presented theories are on the estimation error of the Koopman operators, but the proposed method consists of not only the Koopman part but also other components such as the map from $w$ to $\theta$. It would be nice if the authors could elaborate more on the motivation of the theoretical arguments: for example, the overall picture (purpose) of the analyses, key assumptions, and remaining gap to fully analyze the prediction errors of the proposed method, if any.
Experimental Designs Or Analyses: As mentioned in the "Methods And Evaluation Criteria" section, the experiments make sense.
Supplementary Material: I took a quick look at the appendices but did not check any details.
Relation To Broader Scientific Literature: Time-series prediction is widespread in any domain of science and engineering.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: I do not have major questions that strongly affect my evaluation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments.
**[1. empirical observation supports]**
We have added empirical evidence and analysis to support the two quoted claims. We will include concrete empirical observations in the revision. Please refer to [W1. lack sufficient interpretability], *Response to Reviewer Snqd* for further details, and `spectrum_reviewers_Snqd_1pn6.pdf`, https://anonymous.4open.science/r/KoNODE-D8F2/ for the visualization.
---
**[2. theoretical claims]**
Thank you for the suggestion, and we are sorry for the confusion. We will clarify two motivations for Section 3.3 in the revised manuscript: the presented error bound provides (1) a theoretical reference for the choice of dimension $m$, and (2) a proof that the error introduced by the additional Koopman module is minor compared to conventional ODE models, apart from the potential improvement due to the advanced fitting ability.
The theoretical claims are given under the assumption that the parameters $θ(t)$ indicate the running dynamics, which are intrinsically determined for a given trajectory $x(t)$. Consequently, an accurate model of $θ(t)$ directly leads to the accurate estimation of $x(t)$.
Thm. R.6 below conducts a theoretical comparison of the overall prediction error between the proposed method and a conventional ODE based on the given theories to indicate the superiority of our method.
**Definition R.5 (The Static Dynamic Model Condition)**
The dynamics satisfy the condition if and only if, for the function $g$ and any $\tilde{ε}>0$, $∃\tilde{δ}>0$ such that
$$
ℙ(|g(θ^*) - g[\tilde{θ}(t)]|<\tilde{ε})\ge 1-\tilde{δ},
$$
where $\tilde{θ}(t)$ is randomly drawn from the ideal trajectory defined in Eq. (1), Theorem R.4, [W2 & Q2. selection of Koopman dimension], *Response to Reviewer iauo*, and $θ^*≜\underset{θ}{\text{argmin}}\int_t\left\Vert f(x(t), t;θ)-f(x(t), t;\tilde{θ}(t))\right\Vert$ is the LSE estimate of a static $θ$. The condition holds when the true dynamics behind the trajectory are static, which ensures a good fit of the NODE.
**Theorem R.6**
During the modeling of trajectory $x(t)$, the estimation error upper bound of the proposed model is smaller than that of a conventional ODE method if the observable function $g$ does NOT satisfy *the static dynamic model condition*, i.e., for the bound
$$
\mathcal B[\hat{θ}(t)]≜\sup_{θ}\left\Vert\frac{∂f(x(t), t;θ)}{∂θ}\right\Vert⋅\Vert\hat{θ}(t) -\tilde{θ} (t)\Vert,\tag{2}
$$
there exists,
$$
ℙ(\mathcal B[{θ}^†(t)]\le\mathcal B[θ^*])\ge1-δ^*,
$$
where $θ^†(t)$ is the trajectory predicted by the proposed method while $θ^*$ consists of the static parameters in the conventional ODE method.
*Proof:* We use the notations of Def. R.5. Eq. (2) clearly gives a bound on the estimation error, where the "sup" exists as $f$ is Lipschitz continuous, and the bound is attained for a linear mapping $f$.
Note that $\hat{θ}(t)=θ^*$ in NODE and $\hat{θ}(t)=θ^†(t)≜h(\hat{w}(t);ψ)≈g^{-1}(\xi ^⊤\hat{w}(t))$ in the proposed Koopman model where $\hat{w}(t)$ satisfies the Koopman model of matrix $\hat{K}_N$. Thm. 3.4, Manuscript shows that
$$
ℙ(|\xi ^⊤\hat{w}(t+Δt) - g[\tilde{θ}(t+Δt)]|<ε^*⋅\Vert\xi\Vert⋅\Vert\hat{w}(t)\Vert )\ge 1-δ,
$$
with $ε^*$ denoting the corresponding RHS. As the bases $u_i(t)\in L^2(Θ)$ for the observables $\hat{w}$, Lemma C.12 in the Appendices implies $ℙ(|\hat{w}_i(t)|\le\frac{1}{\sqrt{δ}})\ge 1-δ$ if $\Vert u_i(t)\Vert _{L^2(Θ)} = 1$ and $\max|\xi _i|\le r(\mathcal K)$,
$$
\textstyle ℙ(\Vert\xi\Vert⋅\Vert\hat{w}(t)\Vert\le\frac{m\sqrt{m}⋅r(\mathcal K)}{\sqrt{δ}})\ge 1-δ.
$$
Consequently, if we let
$$
\tilde{ε}=\frac{2\sqrt3σm^{\frac{7}{2}}r^2(\mathcal K)}{\sqrt{Nδ(δ-2m^{-\frac{r}{3}})}⋅\min\\{1, r(\mathcal K)\\}-1}+o(m^{-\frac{r}{3}}),
$$
then $ℙ(|\xi ^⊤\hat{w}(t+Δt) - g[\tilde{θ}(t+Δt)]|<\tilde{ε})\ge 1-2δ$.
On the other hand, for $g$ that fails to satisfy the static dynamic model condition at error $\tilde{ε}$, we have
$$
\begin{aligned}
&ℙ(\Vertθ^†(t) -\tilde{θ}(t)\Vert\le\Vertθ^*-\tilde{θ}(t)\Vert ),\\\\
= &ℙ(|g[θ^†(t)] - g[\tilde{θ}(t)] |\le|g(θ^*)-g[\tilde{θ}(t)]|)\\\\
&-ℙ(|g[θ^†(t)] - g[\tilde{θ}(t)] |\le|g(θ^*)-g[\tilde{θ}(t)]|\text{ and }\tilde{θ}(t)\in Q),\\\\
\ge &ℙ(|g[θ^†(t)] - g[\tilde{θ}(t)] |\le|g(θ^*)-g[\tilde{θ}(t)]|)-ℙ(\tilde{θ}(t)\in Q)\ge 1-δ^*,
\end{aligned}
$$
where $Q≜\\{θ|~|g(θ^*)-g(θ)|<\tildeε\\}$ and $δ^*≜2δ+\tildeδ+ℙ(\tilde{θ}(t)\in Q)$.
As a result, with a probability of $1-δ^*$, the error bound in Eq. (2) for the proposed method is lower than that of the conventional method. | Summary: This paper explores the challenge of modeling time series with NODEs. The authors propose a Koopman-driven framework named KoNODE that hierarchically encodes system dynamics through evolving ODE parameters and Koopman linear operators. Specifically, they introduce a three-level architecture—spanning observed state dynamics, parameter dynamics, and intrinsic Koopman linear dynamics—to disentangle surface-level behaviors from fundamental governing rules. Extensive experiments on synthetic and real-world datasets validate the effectiveness of the proposed approach.
## update after rebuttal
All my concerns have now been adequately addressed. Overall, this is a good work. However, considering the limited application scenarios, I will maintain my original score of "weak accept."
Claims And Evidence: N/A
Methods And Evaluation Criteria: N/A
Theoretical Claims: N/A
Experimental Designs Or Analyses: N/A
Supplementary Material: Some of them. C.3
Relation To Broader Scientific Literature: This work is beneficial for the development of AI for Science research.
Essential References Not Discussed: Please refer to the Weaknesses.
Other Strengths And Weaknesses: **Advantages:**
1. The proposed KoNODE framework is both theoretically grounded and empirically effective in capturing intrinsic system dynamics.
2. The experiments are very detailed and thorough, clearly demonstrating the effectiveness and efficiency of the proposed method.
3. The manuscript is well-structured and clearly written, facilitating easy comprehension for readers.
**Weakness:**
1. In my opinion, the first contribution, which is claimed to “uncover the fundamental principles driving system evolution”, appears somewhat overstated, as the identified deepest dynamics still lack sufficient interpretability.
2. This framework is very generic, but its primary focus on time-evolving ODE parameters is not yet a common paradigm. Could the authors compare their setting with other generalizable NODEs [1][2][3]?
---
**Reference:**
[1] LEADS: Learning Dynamical Systems that Generalize Across Environments. NeurIPS, 2021.
[2] Generalizing to New Physical Systems via Context-Informed Dynamics Model. ICML, 2022.
[3] Generalizing Graph ODE for Learning Complex System Dynamics across Environments. KDD, 2023.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Line 178-179, the left half of the page. Could the author clarify why the assumption $A \equiv D$ makes sense?
2. In Section 3.3.2, Theorems 3.3 and 3.4 are formulated in Hilbert space rather than in the observation space of the data. Should $f$ and $h$ be maintained as characteristic functions to ensure these two theorems valid?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback.
**[W1. lack sufficient interpretability]**
We apologize for the unclear description of "revealing the fundamental driving forces of system evolution" and will clarify it in the Introduction and Experiment sections in the revision.
In our framework, the deepest dynamics, i.e., the Koopman linear dynamics $dw/dt = Aw$, are obtained through Koopman operators. Fundamental evolution principles are revealed by the spectrum of the operators, which corresponds element-wise to matrix $A$ and is given by $λ_j(K)=e^{α_jΔt}(\cos β_jΔt±\mathrm{i}\sin β_jΔt)$ in our framework. Specifically, the real part of the eigenvalues indicates the evolution speed, while the imaginary part corresponds to intrinsic frequencies and periodicity. Analyzing the elements of $A$ can help identify the system’s dominant driving modes and frequencies and offer insights into stability, periodicity, or other inherent characteristics, thus providing a mathematically interpretable understanding of the system's evolution.
To further demonstrate this, we provide a more detailed analysis and interpretation of the learned dynamics in Sec. 5.2 by visualizing the spectrum of the approximate Koopman operator for the Oscillator dataset in `spectrum_reviewers_Snqd_1pn6.pdf`, https://anonymous.4open.science/r/KoNODE-D8F2/. First, the magnitudes of the spectrum are approximately one for both dynamical systems, indicating boundary stability, where the state remains on a stable trajectory without diverging or converging. Second, the dominant spectrum of the nonlinear oscillator (on the left in the figure) has similar imaginary parts (i.e., the frequency components are the same), indicating that the system exhibits clear periodicity. Modes with the same imaginary part share the same periodic components.
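This spectral reading can be sketched numerically: for a $2\times2$ block of $A$ with rate $α$ and frequency $β$ (the values below are illustrative assumptions, not fitted quantities), the discrete-time Koopman eigenvalues recover stability from the magnitude and the intrinsic frequency from the angle.

```python
import numpy as np

dt = 0.1
alpha, beta = 0.0, 2.0 * np.pi   # illustrative: zero decay rate, one cycle per unit time

# 2x2 real block of the generator A encoding the conjugate pair alpha ± i*beta
A_block = np.array([[alpha, -beta], [beta, alpha]])

lams = np.linalg.eigvals(A_block)     # continuous-time eigenvalues alpha ± i*beta
koopman_eigs = np.exp(lams * dt)      # lambda_j(K) = e^{alpha dt}(cos(beta dt) ± i sin(beta dt))

magnitudes = np.abs(koopman_eigs)     # ~1 here: boundary stability (no growth or decay)
frequencies = np.angle(koopman_eigs) / dt   # recovers ± beta, the intrinsic frequency
```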
---
**[W2. comparison to generalizable NODEs]**
We have added the comparison between our framework with other generalizable NODEs, LEADS [1], and CoDA [2] in `supplementary_table_reviewers_Snqd_waH1.pdf`, https://anonymous.4open.science/r/KoNODE-D8F2/. Due to the lack of publicly available code for [3], we did not compare with it.
---
**[Q1. A≡D]**
As stated in lines 174-178 on the left, $\frac{dg(θ)}{dt}=Ag(θ)\iff\frac{d\tilde{g}(θ)}{dt}=PAP^{-1}\tilde{g}(θ)=D\tilde{g}(θ)$, hence we can assume $A\equiv D$ by modeling $\tilde{g}$ instead of $g$. We will clarify the definition of $P$ to avoid confusion. The assumption $A≡D$ is not only mathematically rigorous but also, as addressed in [W1. lack sufficient interpretability], allows us to analyze the system's behavior directly by observing the elements of matrix $A$.
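The equivalence used in this answer, that a change of basis $\tilde{g}=Pg$ turns the generator $A$ into the similar matrix $D=PAP^{-1}$ without changing the spectrum, can be checked numerically; the matrices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def block(alpha, beta):
    # 2x2 real block encoding the conjugate eigenvalue pair alpha ± i*beta
    return np.array([[alpha, -beta], [beta, alpha]])

# Block-diagonal D with two modes (values chosen for illustration)
D = np.zeros((4, 4))
D[:2, :2] = block(-0.1, 2.0)
D[2:, 2:] = block(0.05, 0.5)

# Any similar matrix A = P^{-1} D P shares D's spectrum, so assuming A ≡ D
# (i.e., modeling g_tilde = P g instead of g) loses no generality.
P = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)   # a well-conditioned change of basis
A = np.linalg.inv(P) @ D @ P

eigs_A = np.linalg.eigvals(A)
eigs_D = np.linalg.eigvals(D)
same_real = np.allclose(np.sort(eigs_A.real), np.sort(eigs_D.real))
same_imag = np.allclose(np.sort(eigs_A.imag), np.sort(eigs_D.imag))
```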
---
**[Q2. Hilbert space]**
The theorems are formulated in Hilbert spaces for generality, and this is a very mild assumption. The observation space of the data naturally satisfies this, as we only require the data space to be complete and have an inner product. The two functions are not necessarily characteristic functions. As clarified in line 156 on the right, function h is maintained as the inverse of the characteristic function, while f is the differential function in the NODE framework shown in Eq. (2). | Summary: This paper proposes KoNODE, a hierarchical framework that integrates Koopman operators into Neural Ordinary Differential Equations (NODEs) to learn time-evolving parameters.The authors provide theoretical error bounds for the finite-dimensional approximation of the Koopman operator and show how the proposed method improves long-horizon prediction and generalization on both synthetic oscillators and real-world time series tasks.
## update after rebuttal
Given the fact, that authors addresed all the raised question and provided a new theorem with the proof, I raise the recommendation to Accept (4).
Claims And Evidence: The paper states that modeling time-evolving ODE parameters via Koopman operators enables a deeper representation of underlying system dynamics, improves long-horizon forecasting, and generalizes better to unseen conditions. It has both theoretical evidence with finite-dimensional Koopman approximation error bound and experimental evidence with superior performance on the synthetic and real world data.
Methods And Evaluation Criteria: The authors conducted several experiments on synthetic and real-world data. They selected a wide range of methods including classic Neural ODEs as well as the models with evolving parameters (ANODEV2), that makes the comparison reasonable.
Theoretical Claims: There are several theoretical claims on the error bounds. The proofs provided in the appendix look reasonable; however, they may be a bit hard to follow, specifically the equal signs annotated with stars and asterisks. It would be better to avoid such “nonlinearities” and be more consistent.
Experimental Designs Or Analyses: I reviewed both the synthetic and real-world experiments in detail, and the overall design and selection of baselines are sound. There are two minor concerns. The first one relates to the potential computational overhead introduced by the added Koopman operator layer. While the appendix does include some runtime analysis, it would be helpful to include a concise overview in the main text to clarify any changes in memory or runtime requirements. The second one is that while the authors provide some insight into Koopman operator dimensionality in synthetic settings, it remains unclear how these dimensional choices generalize to real-world scenarios or whether any systematic criterion (e.g., data-driven rank selection) could be used. Addressing these points would further strengthen the paper’s clarity and applicability. Aside from that, the experimental setup appears valid, with no major issues that would detract from the paper’s conclusions.
Supplementary Material: Yes, I have reviewed the proofs briefly and the ablation study part.
Relation To Broader Scientific Literature: The paper builds on Neural ODE literature (Chen et al. 2018) and prior Koopman approaches (Lusch et al. 2018, among others). It extends ANODEV2 by modeling theta through a linear Koopman system rather than a general ODE, thus adding interpretability and improved long-horizon stability.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses: The paper’s primary strengths lie in its novel integration of Koopman operators with Neural ODEs, supported by comprehensive theoretical analysis and strong empirical results across both synthetic and real-world datasets. Meanwhile, potential weaknesses include limited discussion of computational overhead in the main text (despite some runtime data in the appendix), and a need for deeper examination of how the chosen Koopman dimensionality affects performance in real-world scenarios and whether a systematic selection method could be applied.
Other Comments Or Suggestions: n/a
Questions For Authors: 1. How does KoNODE handle irregularly sampled data?
2. Is there a systematic way to select the dimension of the Koopman space m? Can any data-driven rank estimation be integrated?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: Thank you for your valuable feedback.
**[W1. changes in memory or runtime]**
We will move the runtime analysis from the appendix to the main text to show our advantage in convergence rate and add a discussion on memory complexity. Regarding memory, we clarify that the memory of the proposed framework is $O(\max\\{n, m, h\\}⋅D)$ compared to $O(nD)$ of NODE for $n$, $m$, $h$ being the dimensions of $x(t)$, $θ(t)$, and the hidden layer respectively and $D$ the model size for the differential function $f$.
---
**[W2 & Q2. selection of Koopman dimension]**
Thank you for the concern about the choice of the Koopman dimension $m$. We provide theoretical lower and upper bounds for $m$. Although other factors may influence the best choice in practice, these bounds provide a reference for a systematic selection criterion.
Specifically, Thm. R.1 shows that $m\le r$ for the rank of trajectory space $r$, and Thm. R.4 shows that $m$ has a tight lower bound $⌈\frac{D}{D-r}⌉$ for second-order derivable $x(t)$, where $D$ is the dimension of $Θ$ (the model complexity of $f$) and $r$ is the rank of the data. Note that the lower bound is commonly small for a sufficiently strong network, yet the best hyper-parameter may deviate from these bounds in practice due to (1) the auxiliary dimensions caused by the designed form of the Koopman matrix, (2) the disturbance caused by the multiple choices of the dynamic $φ(θ(t), t)$ when $D\gg n$, and (3) the attempt to achieve more accurate dynamic regression via multiple $\mathfrak F_t$ sets. The theorems are given as follows.
**Theorem R.1 (The Theoretical Upper-Bound of $m$)**
Inequality $m\le r$ holds if:
(1) The data trajectories are approximately in an $r$-dimensional manifold $\mathcal M$, i.e.,
$$
∀ε>0,∃~δ> 0,ℙ_{x(t)\sim\text{data}}[\Vert x(t)-x^\perp(t)\Vert <ε] > 1-δ,
$$
where $x^\perp≜\underset{y\in\mathcal M}{\text{argmin}}\Vert x-y\Vert $ is the projection of $x$ to the manifold.
(2) Function $f$ satisfies the Lipschitz condition and is differentiable.
*Proof:* Since $\mathcal M$ admits a bijective chart $ψ:\mathcal M\toℝ^r$, the map $\phi_t(θ(t)) = \frac{dψ(x^\perp(t))}{dx^\perp(t)}f(x^\perp(t), t;θ(t))$ is bijective. This, together with the uniqueness of $x(t)$ (by Picard–Lindelöf), implies that $θ(t)$ lies on an $r$-dimensional manifold. Hence, $m\le r$.
**Definition R.2 (Frontier Manifold)**
For a dynamic in Euclidean space, a fixed time $t_0$, and an origin $θ^*$, the frontier manifold $\mathfrak F_t$ is the quotient space of the equivalence classes determined by the trajectories, i.e., $\mathfrak F_0$ satisfies (1) $θ^*\in\mathfrak F_0$, and (2) the normal vector at $θ\in\mathfrak F_0$ is $φ(θ, t_0)$. Then, we define $\mathfrak F_t$ as the set of evolved $θ(t)$ with $θ(t_0)\in\mathfrak F_0$.
**Lemma R.3**
For $θ_0\in\mathfrak F_s$, if the trajectory $θ(t)$ follows the dynamic
$$
\frac{dθ(t)}{dt} =φ(θ(t), t),θ(t_0) =θ_0,
$$
a scalar observable function $g(θ)≜𝓒⋅\exp(\frac{λ}{D}\int\text{inv}[φ(θ, t_s(θ))]^⊤dθ)$ maps it into a Koopman space.
*Proof:* Note that for $θ(t_0) =θ_0\in\mathfrak F_s$, $θ(t)$ is uniquely identified by the Picard-Lindelöf Theorem. Moreover, all elements in a frontier manifold $\mathfrak F_s$ are on different trajectories, which creates a bijection between $(θ_0, t)$ and $θ(t)$. Let $t_s(θ)$ be the mapping from $θ$ to time under the condition that $θ_0\in\mathfrak F_s$.
Then, let $g(θ)$ be a scalar observable function,
$$
g(θ)≜𝓒⋅σ^{-1}\text{ where }σ=\exp\left(-\frac{λ}{D}\int\text{inv}[φ(θ, t_s(θ))]^⊤dθ\right),
$$
where $λ$ and $𝓒$ are constants and $\text{inv}[⋅]$ is the element-wise inverse of a vector. Therefore,
$$
\begin{aligned}
\frac{dg(θ)}{dt} -λ⋅g(θ) =&∇_{θ}g(θ)^⊤φ(θ, t) -λ⋅g(θ)⋅φ^⊤(θ, t)⋅\text{inv}[φ(θ, t)]/D,\\\\
=&φ^⊤(θ, t)⋅[∇_{θ}g(θ) -λ/D⋅g(θ)⋅\text{inv}[φ(θ, t)]], \\\\
=&φ^⊤(θ, t)⋅\frac{1}{σ}∇_{θ}[g(θ)⋅σ]= 0.
\end{aligned}
$$
The last equality holds because $g(θ)⋅σ =𝓒$ is constant. Hence $\frac{dg(θ)}{dt}=λ⋅g(θ)$, i.e., $g(θ)$ lies in the Koopman space.
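As a numerical sanity check of the lemma (our own 1-D toy construction, not from the paper): with $D=1$ and $φ(θ)=θ$, the observable reduces to $g(θ)=𝓒θ^λ$, which should satisfy $\frac{dg}{dt}=λg$ along trajectories.

```python
import numpy as np

# Toy 1-D check: phi(theta) = theta gives theta(t) = theta0 * e^t, and the
# observable g(theta) = theta**lam should evolve linearly, dg/dt = lam * g
# (i.e., g is a scalar Koopman eigenfunction). All values here are our own.
lam = 0.7
theta0 = 2.0
t = np.linspace(0.0, 1.0, 1001)
theta = theta0 * np.exp(t)      # exact trajectory of dtheta/dt = theta
g = theta ** lam                # candidate Koopman observable (C = 1)

dgdt = np.gradient(g, t)        # finite-difference time derivative
rel_err = np.max(np.abs(dgdt - lam * g) / np.abs(lam * g))
print(rel_err)                  # small: dg/dt matches lam * g numerically
```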
**Theorem R.4 (The Theoretical Lower-Bound of $m$)**
To model the trajectory of $x(t)$ if it is second-order derivable, the dimension of $w(t)$ is ideally $⌈\frac{D}{D-r}⌉$ where $r$ is the rank of data.
*Proof:* For a differentiable $f$, the dynamic $\frac{dθ(t)}{dt} =φ(θ(t), t)$ best models the trajectory $x(t)$ where
$$
φ(θ(t), t)≜\left[\frac{∂f}{∂θ(t)}\right]^{†}\left[\frac{d^2x(t)}{dt^2} -\frac{∂f}{∂t} -\frac{∂f}{∂x(t)}f(x(t), t;θ(t))\right],\tag{1}
$$
and $A^†$ being the pseudo-inverse of the Jacobian matrix.
Lemma R.3 implies that for $θ_0\in\mathfrak F_s$, only one dimension of $w$ is needed. Thm. R.1 indicates that the gradient $φ(θ(t), t)$ lies in a space of rank at most $r$; thus the frontier manifold $\mathfrak F_s$ covers at least $D-r$ dimensions. To ensure a full cover of the $Θ$-space at initialization, an ideal dimension of $w$ is $⌈\frac{D}{D-r}⌉$.
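A small numerical sketch (hypothetical sizes, our own code, not the paper's) of how the two bounds could be evaluated in practice: estimate the trajectory rank $r$ via SVD of stacked trajectory states, then compute $⌈D/(D-r)⌉\le m\le r$.

```python
import numpy as np

# Illustrative only: synthetic trajectories confined to a rank-3 subspace,
# with a hypothetical parameter count D for the differential function f.
rng = np.random.default_rng(0)
n_traj, n_steps, n_dim = 50, 100, 10
basis = rng.normal(size=(3, n_dim))              # rank-3 subspace
coeffs = rng.normal(size=(n_traj * n_steps, 3))
X = coeffs @ basis                               # stacked trajectory states

s = np.linalg.svd(X, compute_uv=False)
r = int(np.sum(s > 1e-8 * s[0]))                 # numerical rank of the data

D = 1000                                         # hypothetical model size of f
m_lower = int(np.ceil(D / (D - r)))              # Thm. R.4 lower bound
m_upper = r                                      # Thm. R.1 upper bound
print(r, m_lower, m_upper)  # 3 2 3
```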
---
**[Q1. irregularly sampled data]**
NODE-based methods model the system continuously, allowing them to handle irregularly sampled data by integrating the ODE and evaluating its solution at arbitrary (possibly non-uniform) time points.
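This can be illustrated with a generic ODE solver (a stand-in sketch using scipy, not the paper's implementation): the solver integrates the dynamic once and queries the state at irregular sample times.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A simple linear dynamic standing in for a learned differential function f.
def f(t, x):
    return -0.5 * x

# Hypothetical, non-uniform observation times.
t_irregular = np.array([0.0, 0.13, 0.41, 0.97, 1.5, 2.71])

sol = solve_ivp(f, (0.0, 3.0), [1.0], t_eval=t_irregular,
                rtol=1e-8, atol=1e-10)

exact = np.exp(-0.5 * t_irregular)               # closed-form solution
err = np.max(np.abs(sol.y[0] - exact))
print(err)                                       # near machine precision
```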
---

Structure-informed Risk Minimization for Robust Ensemble Learning | Accept (poster)

Summary: This paper introduces a novel framework to learn ensemble weights that improve out-of-distribution (OOD) robustness. The key idea is to incorporate structural relationships between training distributions to build a realistic uncertainty set. The authors propose a computationally efficient optimization algorithm with theoretical guarantees. Empirically, the proposed method consistently outperforms existing ensemble methods across diverse benchmarks including DomainBed and WILDS, demonstrating superior OOD generalization capability.
Claims And Evidence: The claim that the proposed method (SRM) balances worst-case robustness with average performance is supported by theoretical guarantees in Section 4 and empirical results showing consistent improvements over both ERM and DRO approaches.
The effectiveness of structure-informed uncertainty sets is validated by ablation studies investigating different graph construction methods, centrality measures, and regularization strengths.
Performance improvements over existing methods are demonstrated consistently across multiple datasets and evaluation settings.
Methods And Evaluation Criteria: The idea of modeling relationships between training distributions to build a realistic uncertainty set is intuitive and provides a principled way to address the overly-pessimistic issue of DRO.
Evaluation on common benchmarks follows standard practices in the field.
The experiments with various train/test split scenarios on temporal distribution shifts, together with corruption tests, further strengthen the evaluation by considering diverse distribution shifts.
Theoretical Claims: The proofs in the appendix appear sound. The density-centrality relationship proof correctly decomposes distances and establishes bounds based on the distribution space properties. The generalization bound proof properly utilizes the mixture distribution approximation and applies concentration inequalities. However, the proof on convergence analysis seems missing, but the main conclusion seems correct.
Experimental Designs Or Analyses: Experiments follow standard practices for OOD evaluation where multiple environments are used for training and average performance across test environments is reported.
The comparison against multiple baselines is fair and comprehensive.
Ablation studies thoroughly examine the impact of key components.
Personally, I really like the experiment design of various train-test split scenarios on temporal distribution shifts which clearly shows the overly-conservative issue of Group DRO.
Supplementary Material: The supplementary material mainly includes the code. The code seems well-organized although I didn’t run it.
Relation To Broader Scientific Literature: The discussion on ensemble learning for OOD generalization is comprehensive and gets to the point. When test data is unavailable, how to combine models for improved OOD robustness remains a seldom-investigated problem. The common methods are uniform (aggregating all models equally) and greedy (selecting models based on accuracy on an in-distribution validation set). However, neither of them is a promising solution under distributional shifts. DRO is a promising technique to improve robustness but suffers from overly-pessimistic issues. The novelty of this work lies in bridging the two areas by applying DRO to ensemble weight optimization while incorporating structural information to mitigate over-pessimism.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Pros:
Utilizing the structural information to build a realistic uncertainty set is novel and well-motivated. And this paper presents a compelling unified framework that encompasses existing approaches ERM and DRO as special cases through the constraint in Equation 7.
The paper is very well-written and easy to follow. The discussion on alternative methods and connection to prior works shows the authors’ in-depth understanding.
The visualizations of distributional graphs clearly illustrate how SRM assigns higher weights to influential distributions compared to DRO's focus on worst-case distributions. And the experiment on multiple train/test splits under temporal distribution shifts is a clever and insightful design.
The approach is theoretically sound while remaining computationally tractable.
Cons:
The effectiveness of SRM might be limited in scenarios with few training distributions, as the graph structure would be less informative.
Although the discussion of alternative priors (Section 3.1) is excellent, the details of the Laplacian-based prior are missing. I understand the Laplacian-based method is widely used in semi-supervised and few-shot learning tasks. It would be beneficial if the authors could provide more implementation details.
Other Comments Or Suggestions: Typos:
1. In Line 122, it should be G=(V, A).
2. In Definition 3.1, it should be c: P_e -> R^+.
Questions For Authors: SRM requires multiple training distributions. What if there is only one training distribution or distribution ID is unknown?
How does SRM perform when the number of training distributions is very small (2-3)? Does the graph structure still provide meaningful guidance in such scenarios?
Ethical Review Concerns: n/a
Code Of Conduct: Affirmed.
Overall Recommendation: 5

Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their thoughtful and positive feedback. We appreciate your recognition of our paper as "very well-written and easy to follow" and that our approach is "theoretically sound while remaining computationally tractable." Your comments about the "compelling unified framework" and the clarity of our visualizations are particularly encouraging. Below, we address your specific questions:
**1. Proof on Convergence Analysis**.
Our convergence analysis leverages Assumption 4.3 (Lipschitz continuity and smoothness) to track gradient norm decreases across iterations. The key insight is that the regularization term $\lambda D(q\|\|p)$ creates a more strongly convex objective function, accelerating convergence proportionally to $\lambda$. With step sizes $\eta_w = \eta_q = 1/\sqrt{T}$, we apply standard convex optimization techniques to bound the expected squared gradient norm after $T$ iterations by $\frac{C}{(1+\lambda)\sqrt{T}}$, where $C$ depends on $\beta$ and $L^2$. This explains SRM's faster convergence compared to standard DRO ($\lambda = 0$), demonstrating how our structure-informed approach improves both robustness and computational efficiency. The detailed proof will be included in the revised manuscript.
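The claimed effect of $\lambda$ on convergence can be illustrated on a toy quadratic objective (entirely our own construction; the loss, prior, and step size are stand-ins for the paper's setting):

```python
import numpy as np

# Toy illustration: adding a lambda-weighted divergence-to-prior term makes
# the objective more strongly convex, so gradient descent contracts faster
# for larger lambda. Objective: 0.5*||q - target||^2 + 0.5*lam*||q - p||^2.
def run(lam, steps=200, eta=0.1):
    p = np.array([0.5, 0.5])                   # prior ensemble weights
    q = np.array([0.9, 0.1])                   # initial ensemble weights
    target = np.array([0.2, 0.8])              # minimizer of the loss term
    for _ in range(steps):
        grad = (q - target) + lam * (q - p)    # gradient of the objective
        q = q - eta * grad
    return np.linalg.norm((q - target) + lam * (q - p))  # final gradient norm

print(run(0.0), run(1.0))  # the regularized run ends with a smaller gradient norm
```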
**2. Laplacian-based Prior Implementation**
For the Laplacian-based prior, we construct the graph Laplacian matrix $L=D-A$ where $D$ is the degree matrix (diagonal with $D_{ii} = \sum_j A_{ij}$) and $A$ is our adjacency matrix. The optimization constraint becomes $q^T L q \leq \tau$, penalizing weight differences between connected distributions. We solve this via a similar Lagrangian approach as in our main method. While this enforces local smoothness, our centrality-based prior better captures global influence within the distribution manifold, explaining its superior performance in identifying representative distributions.
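A minimal numpy sketch of this construction (variable names and adjacency values are ours, not the authors' code), showing the Laplacian build and the smoothness penalty $q^⊤ L q$:

```python
import numpy as np

# Hypothetical pairwise similarities between three training distributions.
A = np.array([[0.0, 0.8, 0.1],
              [0.8, 0.0, 0.3],
              [0.1, 0.3, 0.0]])
Dg = np.diag(A.sum(axis=1))            # degree matrix
L = Dg - A                             # graph Laplacian L = D - A

q_uniform = np.ones(3) / 3             # equal weights over distributions
q_peaked = np.array([1.0, 0.0, 0.0])   # all weight on one distribution

# q^T L q penalizes weight differences between connected distributions.
print(q_uniform @ L @ q_uniform)       # ~ 0: constant vectors are in L's null space
print(q_peaked @ L @ q_peaked)         # ~ 0.9: peaked weights are penalized
```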
**3. What if there is only one training distribution or distribution ID is unknown?**
Recent methods [1-2] have been proposed to infer distribution IDs from data. Our approach is orthogonal to these methods and can be built on top of them to handle scenarios where distribution IDs are unknown.
**4. How does SRM perform when the number of training distributions is very small (2-3)?**
SRM requires at least three training distributions to build a meaningful graph. In our PACS experiment (using "Art," "Sketch," and "Photo" as training with "Cartoon" as test), our graph analysis identified "Art" as the most central domain. We hypothesize this occurs because "Art" inherently combines photographic elements with artistic styles, making it informative for generalizing to "Cartoon." Even with just three distributions, SRM successfully identifies influential distributions through structural analysis.
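As an illustration of this kind of structural analysis (our own toy numbers, not the paper's actual distances), closeness centrality over a three-node distance graph singles out the domain closest to the others:

```python
import numpy as np

# Hypothetical pairwise Wasserstein distances; "Art" sits between the others.
domains = ["Art", "Sketch", "Photo"]
Dmat = np.array([[0.0, 1.0, 1.0],
                 [1.0, 0.0, 2.0],
                 [1.0, 2.0, 0.0]])

# Closeness centrality: c_i = (n - 1) / sum_j d_ij (higher = more central).
closeness = (len(domains) - 1) / Dmat.sum(axis=1)
most_central = domains[int(np.argmax(closeness))]
print(most_central)  # Art
```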
All noted typos will be corrected in the revised manuscript. Please let us know if you have any further questions or suggestions. We would be happy to elaborate.
References:
[1] Creager et al. "Environment inference for invariant learning." ICML 2021.
[2] Liu et al. "Just train twice: Improving group robustness without training group information." ICML 2021.
---
Rebuttal Comment 1.1:
Comment: All my concerns have been addressed. I will raise the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review and positive feedback. We are happy that our responses have addressed all your concerns and appreciate your decision to raise the score.
We will include the requested details on convergence analysis, Laplacian-based prior implementation, and our approach's effectiveness with limited training distributions. All noted typos will be corrected in the final manuscript.
Thank you again for your valuable insights that have helped strengthen our paper. | Summary: This paper presents SRM, a method to improve how ensemble models handle unseen data changes by leveraging the relationships between training data distributions. SRM builds a network of these distributions, measuring their similarities with a simplified distance metric. It prioritizes "central" distributions that are most representative of the overall data structure. Using an efficient optimization process, SRM balances worst-case robustness and average performance. Tests on standard benchmarks show SRM outperforms existing methods like DRO and ERM in adapting to new environments. The approach is backed by theory explaining why focusing on structural relationships improves reliability and convergence speed.
Claims And Evidence: The paper’s claims are partially supported by evidence. Key strengths include empirical validation on benchmarks, where SRM shows modest but consistent improvements over DRO and ERM, and ablation studies confirming the value of structural priors like closeness centrality. However, claims about computational efficiency lack direct timing comparisons, and theoretical assumptions remain untested for extreme distribution shifts. While the core idea of leveraging distributional graphs is novel and empirically validated, gaps in validating efficiency, base learner diversity, and extreme OOD performance weaken the overall support for the method’s broad applicability.
Methods And Evaluation Criteria: The proposed methods in SRM are reasonably designed to address the challenges of robust ensemble learning under distribution shifts. By leveraging structural relationships and centrality priors, SRM mitigates the over-conservatism of traditional DRO and better models real-world distribution shifts. The Gaussian Wasserstein approximation is practical for image data, reducing computational complexity, while the alternating gradient optimization ensures valid constraints. The evaluation on benchmarks like DomainBed and WILDS covers key OOD scenarios effectively, but the reliance on Gaussian assumptions limits its applicability to non-Gaussian data. The absence of extreme OOD tests and runtime efficiency comparisons leaves room for broader validation to strengthen the method's claims. Overall, the methods and evaluation criteria are sensible within the scope of the paper but require expanded testing for wider applicability.
Theoretical Claims: The paper’s theory is mathematically solid and explains how structural priors ( distribution graphs) improve robustness. It works well for gradual shifts but probably has limits in extreme real-world OOD cases (totally new domains or adversarial attacks). The key issue is its reliance on assuming test data can be approximated by training mixtures-a condition rarely met in unpredictable scenarios. Without validation for these extremes, the theory’s practical guarantees remain unproven.
Experimental Designs Or Analyses: The experimental design effectively validates SRM’s performance on standard benchmarks like DomainBed and WILDS, demonstrating strengths in methodological rigor (ablation studies on centrality metrics) and benchmark relevance. However, it exhibits critical limitations: 1) no comparison with individual base learners precludes distinguishing whether gains stem from structural optimization or ensemble diversity; 2) omission of extreme OOD scenarios (novel domains, adversarial attacks) undermines claims about robustness boundaries; and 3) efficiency claims lack runtime validation against exact methods (Sinkhorn for EMD). While the design supports controlled efficacy, addressing these gaps is essential to establishing SRM’s scalability and real-world applicability.
Supplementary Material: The supplementary material (Appendices A and B) provides additional experimental details and theoretical proofs but fails to address critical gaps in the main text. Appendix A clarifies implementation choices (ResNet-50 models, hyperparameter tuning for λ) and confirms graph construction via Wasserstein distance. However, it still lacks single base learner comparisons (critical for isolating SRM’s structural contributions) and extreme OOD experiment. Appendix B formalizes proofs for Theorems 4.2-4.6, strengthening theoretical rigor but remaining abstracted from real-world distribution shifts. While the supplements improve transparency, they do not resolve key concerns about practical robustness or scalability, leaving the method’s limitations (reliance on mixture assumptions) unaddressed.
Relation To Broader Scientific Literature: SRM’s contributions sit at the intersection of robust optimization, ensemble learning, and distributional geometry. By integrating structural priors into distributional robustness, it advances the state-of-the-art in OOD generalization. However, its practical validity hinges on assumptions (test distributions as training mixtures) that diverge from extreme OOD scenarios documented in literature (cross-modal shifts, adversarial attacks). Future work could bridge this gap by incorporating meta-learning or non-parametric distribution modeling.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper presents a creative fusion of graph theory, distributional robustness, and ensemble learning, offering a novel framework (SRM) that addresses key limitations of traditional methods like DRO and deep ensembles. Its practical relevance to real-world OOD scenarios (climate change, healthcare) is underscored by strong empirical results on benchmarks like DomainBed and WILDS, and efficiency gains from Gaussian-Wasserstein approximations enhance scalability. However, the work is constrained by overreliance on untested assumptions (test distributions as training mixtures), gaps in extreme OOD validation (adversarial shifts), and ambiguity in theoretical contributions (geometric assumptions on distribution manifolds). While the integration of structural priors and robust optimization is conceptually compelling, the method’s scalability to complex, nonlinear distribution shifts remains uncertain without broader experimental validation and clearer justifications for its theoretical premises.
Other Comments Or Suggestions: 1. Notation Consistency: Ensure consistent notation for key variables (e.g., clarify whether \lambda refers to regularization strength or another parameter across sections). Define all symbols in equations (e.g., d for dimensionality) when first introduced.
2. Experimental Details: Specify the number of training/validation splits used for hyperparameter tuning (e.g., how many \lambda values were tested). Report computational runtime for SRM vs. baselines (even if approximate) to contextualize efficiency claims.
Questions For Authors: Q1. Theoretical Claims: Relies on assumptions with weak generalizability (test distributions as training mixtures) and geometric assumptions (e.g., manifold structure) that lack validation for non-Gaussian/non-manifold data.
Q2. Experimental Design: Omits critical validations: comparisons with single base learners, extreme OOD scenarios (e.g., adversarial shifts), and runtime efficiency benchmarks.
Q3. Methods: Over-reliant on Gaussian approximations (limiting non-Gaussian scalability) and ignores the impact of graph pruning on robustness.
Q4. Clarity: Lack of clear and unambiguous definitions of all symbols.
Code Of Conduct: Affirmed.
Overall Recommendation: 2

Rebuttal 1:
Rebuttal: Thanks for acknowledging the novelty of our work. Below, we address the main points raised:
1. Assumption of Our Paper and Extreme OOD Scenarios
In Equation (14), we assume that the test distribution lies within a bounded divergence from a mixture of training distributions. This assumption is aligned with a well-established foundation in learning under distribution shift, as formalized by Ben-David et al. [1]. Specifically, Theorem 2 in [1] shows that the risk on the test distribution is bounded by the empirical risk on the source domain(s) plus a divergence term (e.g., H-divergence) between the source and target distributions. When this divergence becomes too large, no method can guarantee meaningful generalization bounds.
Our method is designed for natural distribution shifts and evaluated on two challenging and realistic benchmarks: DomainBed and WILDS, both of which are widely adopted in the OOD generalization community. We acknowledge that adversarial robustness is an important but orthogonal research direction and is outside the scope of this work.
2. Comparison with Individual Base Learners
To further isolate the benefit of structure-informed weighting, we compare SRM with both individual models and a uniform ensemble baseline. The table below reports results on the DomainNet dataset across six test domains. SRM significantly outperforms both the average individual performance and uniform ensemble. For example, on the “Clipart” domain, accuracy improves from 56.79% (mean of individual models) to 58.75% (uniform ensemble) and then to 63.02% with SRM, showing that leveraging structural relationships provides substantial gains.
| test domain | min | max | mean | std | Uniform | Ours |
| -------- | ----- | ----- | ----- | ---- | ------- | ----- |
| Clipart | 52.74 | 59.51 | 56.79 | 2.39 | 58.75 | 63.02 |
| Infograph | 18.19 | 20.12 | 18.98 | 0.61 | 21.30 | 21.77 |
| Painting | 44.99 | 48.36 | 46.57 | 1.09 | 51.07 | 51.92 |
| Quickdraw | 10.98 | 13.51 | 12.25 | 0.75 | 14.11 | 14.96 |
| Real-world | 57.31 | 61.30 | 59.67 | 1.09 | 62.20 | 64.51 |
| Sketch | 46.76 | 51.12 | 49.11 | 1.21 | 52.31 | 54.66 |
These results confirm that our method’s improvement is not simply due to ensemble averaging, but stems from meaningful structural modeling.
3. Runtime Validation Against Exact Methods
We provide a runtime comparison of different distance metrics used for constructing the distributional graph, using the PACS dataset (3 domains, 1000 samples, 2048-dimensional features). As shown below, our Gaussian-approximated 2-Wasserstein distance achieves the best trade-off between accuracy and computational efficiency.
| Distance | Running Time | Worst-region Acc. (%) | Time Complexity |
| ------------- | ------------ |------------------------|---------------------|
| 2-Wasserstein | 28.05s | 38.10 | O(nd² + d³) |
| Diffusion EMD | 2.56s | 37.99 | O(nd) |
| EMD | 516.41s | 38.06 | O(n³d²) |
While Diffusion EMD is faster, it sacrifices accuracy. Exact EMD is computationally infeasible for large-scale datasets. Our method balances these trade-offs effectively, enabling practical deployment on real-world benchmarks.
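For concreteness, a sketch of the Gaussian-approximated 2-Wasserstein distance (our own implementation of the standard closed form, not the authors' code; the sample data is synthetic):

```python
import numpy as np
from scipy.linalg import sqrtm

# Closed form for Gaussians fitted to each feature set:
#   W2^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2}),
# which costs O(n d^2 + d^3) rather than the O(n^3 d^2) of exact EMD.
def gaussian_w2(X1, X2):
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    S2h = sqrtm(S2)
    cross = sqrtm(S2h @ S1 @ S2h)         # may carry tiny imaginary noise
    w2_sq = np.sum((mu1 - mu2) ** 2) + np.trace(S1 + S2 - 2 * np.real(cross))
    return float(np.sqrt(max(w2_sq, 0.0)))

rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(2000, 5))     # synthetic "domain" features
X2 = rng.normal(3.0, 1.0, size=(2000, 5))     # shifted by 3 in each dimension
w = gaussian_w2(X1, X2)
print(w)  # ≈ sqrt(5 * 3^2) ≈ 6.7 for these shifted Gaussians
```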
We hope this addresses your concerns. Please let us know if you have any further questions or suggestions. We would be happy to elaborate.
Reference
[1] Ben-David, Shai, et al. "A theory of learning from different domains." Machine Learning 79.1-2 (2010): 151–175.

---

Summary: This paper proposes a framework for learning robust ensemble weights without requiring access to test data. It aims to mitigate the over-pessimism of Distributionally Robust Optimization (DRO) by focusing the uncertainty set on more plausible structures. The idea is solid, and the proposed algorithm is computationally feasible. However, the experimental results do not provide enough evidence to convincingly support the claimed superiority of the method.
Claims And Evidence: 1. Reasonable assumption and practical setting: The paper tackles the problem of learning robust ensemble weights that generalize well to unseen test distributions. The method performs effectively under a relatively mild assumption that the test performance can be approximated by a mixture of training domains.
2. Good solution to the problem: This paper overcomes the over-pessimism of DRO by not placing too much weight on distant distributions and focusing more on central training distributions.
Methods And Evaluation Criteria: - Robust across different centrality metrics and distance metrics: Ablation study illustrates the framework is conceptually valid, and can be replaced with different centrality metrics and distance metrics while maintaining good performance.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: 1. Lack of comparison with closely related work: It would strengthen the paper to compare the proposed method with other related approaches aimed at addressing over-pessimism in DRO, such as [1,2,3], as well as other OOD generalization methods (see baselines in DomainBed).
2. Statistical significance of results: In table 1, I notice that all of your baselines have **very very similar test accuracy** on many of the datasets, which seems implausible. Also, for some lines in table 1 and table 2, the improvement achieved by your method over other baselines is not statistically significant.
3. Performance in extrapolation settings: In table 2, you claim your method outperforms other baselines in both distribution interpolation and distribution extrapolation settings. As your method learns an optimal mixture of training distributions constrained by prior p, can you explain why it achieves good performance in distribution extrapolation settings?
4. Also, I would like to see some results on tabular datasets compared with well-known ensemble methods like XGBoost, LightGBM, CatBoost, etc.
[1] DORO: Distributional and Outlier Robust Optimization. Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar
[2] Geometry-Calibrated DRO: Combating Over-Pessimism with Free Energy Implications. Jiashuo Liu, Jiayun Wu, Tianyu Wang, Hao Zou, Bo Li, Peng Cui
[3] Boosted CVaR Classification. Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar
Supplementary Material: I did not review the codes in the Supplementary Material.
Relation To Broader Scientific Literature: This paper presents a framework for learning robust ensemble weights without requiring access to test data. It aims to mitigate the over-pessimism of Distributionally Robust Optimization (DRO) by focusing the uncertainty set on more plausible structures. The idea is solid, and the proposed algorithm is computationally feasible.
Essential References Not Discussed: [1] DORO: Distributional and Outlier Robust Optimization. Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar
[2] Geometry-Calibrated DRO: Combating Over-Pessimism with Free Energy Implications. Jiashuo Liu, Jiayun Wu, Tianyu Wang, Hao Zou, Bo Li, Peng Cui
[3] Boosted CVaR Classification. Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 1

Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We address each concern below.
**1. Comparison with [1-3] and Other OOD Generalization Methods**
While SRM and the methods in [1–3] all aim to reduce the pessimism often observed in DRO, they approach the problem from different directions:
- [1–3] primarily focus on data-point-level robustness, targeting noisy samples or outliers.
- SRM, by contrast, operates at the distribution level by identifying influential training distributions via centrality in a distribution graph.
This makes SRM particularly suitable for domain generalization scenarios where relationships among distributions are more informative than individual noisy points. We also note that GCDRO [2] builds a data graph (not a distribution graph) and is mainly designed for regression tasks, which further differentiates it from our setting. We will add a discussion of these works and clarify the distinction in the revised version.
Regarding baselines in DomainBed, our goal is not to introduce a new training algorithm for OOD generalization. Instead, SRM is a model-agnostic framework designed to learn ensemble weights from pretrained models, which can be drawn from any existing training algorithm or model zoo.
**2. Statistical Significance of Results**
We appreciate your concern about the similarity in test accuracies across baselines. This is primarily due to our use of DiWA, which requires all models in the ensemble pool to be initialized identically so they can be merged for efficient inference. This likely reduces diversity among models.
To address this, and following Reviewer Qxor’s suggestion, we constructed a more diverse model pool using different training algorithms (ERM, Mixup, CORAL). The table below shows results on TerraIncognita:
| Algorithm | L100 | L38 | L43 | L46 | Avg |
|--------------|------|------|------|------|------|
| Uniform | 50.4 | 42.6 | **62.0** | 38.7 | 48.4 |
| ERM | 51.3 | 43.7 | 61.4 | 39.5 | 49.0 |
| Group DRO | 50.9 | 43.2 | 61.8 | 39.2 | 48.8 |
| SRM | **52.1** | **44.3** | 61.5 | **40.0** | **49.5** |
The strong result of Uniform on L43 may stem from complementary patterns captured by the diverse models. Still, SRM consistently provides better overall performance. We will include this in the revised paper. All code used is provided in the supplementary materials.
**3. Why SRM Performs Well in Distribution Extrapolation Settings**
We appreciate the reviewer’s interest in our method’s performance under distribution extrapolation. In the FMoW-WILDS benchmark, each year from 2002 to 2017 is treated as a distinct distribution. The distribution extrapolation setting evaluates generalization to test years that lie outside the range of training years:
- Test Before 2004: training on 2007–2018, validation on 2004–2007, testing on 2002–2004
- Test After 2016: training on 2002–2013, validation on 2014–2016, testing on 2016–2017
In “Test Before 2004”, the test years are temporally closest to 2007, and Group DRO focuses its optimization almost entirely on this single year (the worst-case distribution under its framework, shown in Figure 1). However, this strategy ignores other potentially useful training distributions, which can limit generalization. This behavior is also supported by theory: the solution to Group DRO's linear program tends to lie at a vertex of the feasible region.
Our method achieves better performance by balancing the worst-case distribution with structurally influential distributions, as identified through centrality in the distribution graph. Rather than collapsing onto a single training domain, SRM spreads attention across those distributions that are both risky and influential, yielding improved generalization even in extrapolation scenarios.
This aligns with the insights from Dual Risk Minimization and Quantile Risk Minimization (references from Reviewer Qxor), which also highlight that strong OOD generalization emerges from a trade-off between average-case and worst-case risk, rather than optimizing either in isolation.
**4. Experiments on Tabular Datasets and Comparison with Boosting Methods**
We appreciate the suggestion to compare SRM with XGBoost, LightGBM, and CatBoost. However, there is an important distinction:
- Boosting methods build ensembles by training weak learners sequentially, whereas
- SRM assumes a pool of already trained models and focuses on learning the optimal weighting scheme for combining them under distribution shift.
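To make this contrast concrete, here is a minimal sketch (our illustration, not the paper's implementation) of the SRM-style setting: the member models are frozen, and only a simplex-constrained weight vector over their predictions is learnable.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy setup: 3 fixed "pretrained models" output class probabilities for 4 samples.
# In ensemble-weight learning, the models stay frozen; only the mixing weights change.
preds = np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.7, 0.3]],   # model A
    [[0.5, 0.5], [0.1, 0.9], [0.8, 0.2], [0.4, 0.6]],   # model B
    [[0.3, 0.7], [0.3, 0.7], [0.5, 0.5], [0.9, 0.1]],   # model C
])  # shape: (n_models, n_samples, n_classes)

logits = np.zeros(3)                          # learnable parameters (one per model)
w = softmax(logits)                           # weights constrained to the simplex
ensemble = np.einsum('m,msc->sc', w, preds)   # weighted average of predictions
```

With zero logits the weights are uniform, recovering the "Uniform" baseline from the tables; training would update `logits` under the chosen risk objective.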
That said, we agree that applying SRM to tabular datasets is worthwhile. We plan to include experiments on the eICU dataset, which includes electronic health records (EHRs) from 208 hospitals in the U.S. Due to time constraints, we may not include results in the rebuttal but will provide them in the final paper.
Thank you again for your valuable feedback. Please let us know if you have further questions. We would be happy to clarify or expand on any point. | Summary: This work proposes structure-informed risk minimization (SRM), which can be seen as a modification to the Group DRO algorithm, and applies it to robust ensemble learning.
More specifically, SRM optimizes the ensemble weight of multiple fixed pre-trained models to reduce the ensemble’s risk in the worst mixture of training domains that is not too far from the “center” of training domains.
The “center” is also a mixture of training domains whose mixing weights are determined by the distance from one domain to other domains.
In this way, SRM places more optimization pressure on domains that are closer to all the other domains, unlike Group DRO which assigns weights only based on the risk of a domain.
Empirical results show that SRM outperforms several baselines on various OOD generalization benchmarks.
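As an illustration of the weighting described above, the following sketch uses a simple closeness-style centrality over a hypothetical pairwise distance matrix; the paper's actual centrality measure and distribution graph may differ.

```python
import numpy as np

# Hypothetical pairwise distances between 3 training domains (e.g., 2-Wasserstein).
# Domain 0 is close to both others; domain 2 is far from everything.
D = np.array([
    [0.0, 1.0, 2.0],
    [1.0, 0.0, 3.0],
    [2.0, 3.0, 0.0],
])

# Closeness-style centrality: domains nearer to all others get larger weight.
closeness = 1.0 / D.sum(axis=1)
weights = closeness / closeness.sum()
```

Here domain 0 receives the largest weight, matching the intuition that SRM places more optimization pressure on domains that are close to all the others.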
## Update after rebuttal
After carefully reading through the authors' response and other reviewers' comment, I decide to maintain my position (weak reject) for this paper.
I agree with Reviewer mDcw that the paper's main idea (i.e., centrality metric) is novel and the theoretical results are solid.
Moreover, the paper is well written and easy to follow.
My concern, however, lies in the theoretical and empirical significance of the paper.
The paper focuses on a relatively specific (one might say narrow) problem, namely robust ensemble learning, that aims to learn good mixing weights (with a linear layer) for the predictions of different pre-trained models.
While focusing on this particular problem is fine, the proposed method, SRM, and the theoretical results are only loosely related to the problem: the "ensemble" part of the problem seems to be largely irrelevant.
In this respect, I feel that SRM only addresses a small issue of (robust) ensemble learning.
Another issue of the paper is the gap between the stated goal and the actual realization of SRM.
More specifically, the goal is to balance robustness and *average* performance, but the proposed method optimizes for the worst-domain performance in the vicinity of the *central* domain instead of simply the *average* domain.
No clear reason is given for why the more complicated approach is opted for.
Under the current setting, I fail to see any meaningful difference between SRM and the simpler approach using the average domain.
Empirically, SRM only marginally improves on the baselines, validating my concerns above. I think the paper would greatly benefit from a better setting or scenario to demonstrate the distinct properties and effectiveness of SRM. I sincerely hope the authors will not be discouraged if this paper is rejected, because the idea of domain centrality is really interesting and I believe it will probably shine under a slightly different light.
Claims And Evidence: The authors claim that the proposed method achieves superior OoD generalization compared to existing ensemble combination strategies across diverse benchmarks. However, as shown in Tables 1 and 2, the empirical improvement over ERM is marginal (66.23 $\to$ 66.54 on DomainBed, and 51.45 $\to$ 51.57 on FMoW-WILDS). Moreover, if SRM does, as the authors claim, provide a more realistic approximation of potential test distributions, then it should be better demonstrated under more general settings than ensemble learning. For example, can SRM improve the OOD performance of individual models by updating their parameters?
Methods And Evaluation Criteria: Both the methods and the evaluation criteria make sense for the problem. The only thing that I would like to point out is that the proposed structure-informed risk minimization (SRM) is only loosely connected with ensemble learning. SRM feels like a more general principle for OOD generalization where ensemble learning is just a very narrow use case. It is unclear what consideration in SRM is specifically for ensemble learning for it to be effective.
Theoretical Claims: Proofs are not carefully checked, but the theoretical claims look sound to me.
Experimental Designs Or Analyses: The experiments considered a range of baselines (both non-optimization and optimization-based ones) on common benchmarks of OOD generalization (DomainBed and WILDS). The model pool for the ensemble consists of ResNet-50 models trained with ERM under different hyperparameter settings. This is fine, but I would suggest the authors also consider some more diverse pools of models, e.g., models trained with different algorithms and/or on different datasets.
Supplementary Material: Yes, I checked the implementation details and the additional results on DomainBed.
Relation To Broader Scientific Literature: I think SRM might be useful for OOD generalization in some more general settings. A lot of previous work on OOD generalization focuses on worst-case risk minimization, e.g., IRM, Group DRO, and VREx, which tend to overemphasize the importance of the worst-case domain. SRM, on the other hand, additionally considers the centrality of a domain when weighing its importance.
Essential References Not Discussed: The discussion on related work is a bit lacking. A closely related work, dual risk minimization [1], also proposes to combine average risk minimization with worst-case risk minimization to mitigate the latter's over-conservativeness but does not assume multiple training distributions. Another related work is quantile risk minimization [2], which focuses on high probability domain generalization and is not mentioned in the paper either. The motivation of these papers is quite similar to the reviewed paper, although the contexts are slightly different.
[1] Li et al. "Dual Risk Minimization: Towards Next-Level Robustness in Fine-tuning Zero-Shot Models." NeurIPS 2024.
[2] Eastwood et al. "Probable domain generalization via quantile risk minimization." NeurIPS 2022.
Other Strengths And Weaknesses: - The centrality of a domain is an interesting and novel concept (at least to me) in the context of OOD generalization. It plays a central role in the proposed method, SRM, but I don't think it's sufficiently discussed. In particular, how does the central domain relate to the average domain? The authors seem to suggest that these two concepts are roughly equivalent, stating that SRM balances worst-case robustness with average performance, but do they really? Suppose there are two different training domains, one of which is much more likely than the other (e.g., camels are more likely to appear in deserts than on grasslands). In this case, the average domain is heavily tilted towards the more likely one, while the central domain is more like a uniform mixture of the two domains. If they are indeed different, why should one value the central domain more than the average domain?
- It is mentioned that a computationally efficient Gaussian-based approximation is used to estimate the 2-Wasserstein distance between data distributions, but many important details of this process are not provided. For example, was the distance computed over the input features or features extracted by some pre-trained model? If it was the latter case, which models were used?
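For reference, the Gaussian-based approximation presumably refers to the closed-form 2-Wasserstein (Bures) distance between Gaussians fitted to the two feature sets. A minimal sketch follows; the exact procedure (in particular, which features are used) is our assumption pending the authors' clarification.

```python
import numpy as np

def psd_sqrt(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def gaussian_w2(mu1, cov1, mu2, cov2):
    """Closed-form 2-Wasserstein distance between N(mu1,cov1) and N(mu2,cov2)."""
    s1 = psd_sqrt(cov1)
    cross = psd_sqrt(s1 @ cov2 @ s1)
    d2 = np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))

def fit_gaussian(x):
    """Fit mean and covariance to a feature matrix x of shape (n_samples, dim)."""
    return x.mean(axis=0), np.cov(x, rowvar=False)
```

This avoids solving an optimal-transport problem between the raw samples, which is what makes the approximation computationally efficient.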
Other Comments Or Suggestions: The paper is well-written and easy to follow. I didn't find any typos or inconsistencies.
Questions For Authors: Please see "Other Strengths And Weaknesses".
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback and acknowledging the novelty of our work. We appreciate the thoughtful comments and address them point-by-point below.
**1. Why Ensemble Learning? Can SRM Improve the OOD Performance of Individual Models?**
We agree that SRM is a general framework that extends beyond ensembles. Recent studies have shown that no single model can perform well across all OOD scenarios, and ensemble learning has emerged as a promising paradigm that leverages the complementary strengths of diverse models. With the growing availability of pretrained models in repositories like HuggingFace, it becomes increasingly practical and impactful to study how to effectively ensemble existing models, rather than focusing solely on training new ones.
That said, SRM can indeed be applied to train individual models. To demonstrate this, we adapted SRM for single-model training and evaluated it on the PACS benchmark. Our method achieves a +0.6% average gain over CORAL, the current SOTA, showing its applicability beyond ensembles:
| Method | Art | Cartoon | Photo | Sketch | Average |
|----------------|------------------|--------------------|--------------------|--------------------|-----------|
| ERM | **88.1 (0.1)** | 77.9 (1.3) | 97.8 (0) | 79.1 (0.9) | 85.7 |
| Group DRO | 86.4 (0.3) | 79.9 (0.8) | **98.0 (0.3)** | 72.1 (0.7) | 84.1 |
| CORAL (SOTA) | 87.7 (0.6) | 79.2 (1.1) | 97.6 (0) | **79.4 (0.7)** | 86.0 |
| SRM | 87.6 (0.5) | **82.0 (0.6)** | **98.0 (0.1)** | 78.8 (1.3) | **86.6** |
**2. Diverse Model Pools**
Following the reviewer’s suggestion, we evaluated SRM with a more diverse ensemble pool by incorporating models trained with ERM, Mixup, and CORAL. The table below shows results on TerraIncognita:
| Algorithm | L100 | L38 | L43 | L46 | Avg |
|--------------|------|------|------|------|------|
| Uniform | 50.4 | 42.6 | **62.0** | 38.7 | 48.4 |
| ERM | 51.3 | 43.7 | 61.4 | 39.5 | 49.0 |
| Group DRO | 50.9 | 43.2 | 61.8 | 39.2 | 48.8 |
| SRM | **52.1** | **44.3** | 61.5 | **40.0** | **49.5** |
The strong result of Uniform on L43 may stem from complementary patterns captured by the diverse models. Still, SRM consistently provides better overall performance. We will include this in the revised paper. All code used is provided in the supplementary materials.
**3. Related Work on Balancing Average and Worst-case Risks**
We thank the reviewer for pointing out relevant related work. SRM shares the goal of balancing robustness and generalization with Dual Risk Minimization (DRM) [1] and Quantile Risk Minimization (QRM) [2], but differs in both formulation and implementation:
- DRM fine-tunes models using concept descriptions, while QRM offers probabilistic guarantees by minimizing quantile risk.
- SRM, in contrast, introduces a structural prior derived from a distributional graph and centrality measures, offering a complementary perspective grounded in geometric relationships among training distributions.
We will expand the related work section to explicitly discuss these connections in the revised manuscript.
**4. Central Domain vs. Average Domain**
The central domain differs from the average domain (uniform mixture) in that it emphasizes global influence within the distributional graph, not just equal weighting. For instance, in PACS (training: Art, Photo, Sketch; test: Cartoon), our graph identified Art as the most central domain. We hypothesize this occurs because "Art" inherently combines photographic elements with artistic styles, making it structurally influential.
When all distributions have equal centrality, our prior reduces to a uniform prior, demonstrating that average-case optimization is a special case of SRM. This connection is discussed in lines 142–144, and empirically validated in Table 1, where SRM consistently outperforms the uniform prior across all DomainBed datasets.
**5. Clarification on 2-Wasserstein Distance**
The 2-Wasserstein distance used for constructing the distribution graph is computed over features extracted by the ERM-trained model with the highest validation accuracy. We will clarify this implementation detail in the revised version.
We sincerely thank the reviewer again for the constructive feedback. Please let us know if you have any further questions or suggestions. We would be glad to elaborate.
References:
[1] Li et al. "Dual Risk Minimization: Towards Next-Level Robustness in Fine-tuning Zero-Shot Models." NeurIPS 2024.
[2] Eastwood et al. "Probable domain generalization via quantile risk minimization." NeurIPS 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clear response. The added experiments look good to me. However, I still don't quite understand the motivation/utility of the central domain.
By definition, the average performance of a model is the model's performance on the *average* domain.
Given that the goal is to balance robustness and average performance, my question is: why optimize the worst-case around the central domain instead of the average domain?
Using the PACS example, it's like: why only focus on the "Art" domain if we can train on all the domains (put another way, would the former lead to better average performance than the latter)?
In this context, the point that the central domain is "structurally influential" seems vacuous to me because the average domain already fully "covers" the entire graph and thus is at least as structurally influential as the central domain.
The paper currently lacks a clear comparison between SRM and this strong baseline using the average domain.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your thoughtful follow-up. Your question is now clearer to us, and we apologize for not addressing this point more directly in our initial rebuttal.
Let us revisit your example: suppose we have two training domains—camels in deserts (majority) and camels in grasslands (minority). Since camels more frequently appear in deserts, empirical risk minimization (ERM) tends to prioritize this majority domain. Models trained with ERM generally perform well when the test domain closely resembles the training domains. We agree with the reviewer that in this scenario, the central domain coincides with the average domain. *(Note that in such cases, a meaningful distributional graph cannot be learned, and the prior defaults to a uniform distribution. Please refer to our response to Reviewer mDcw regarding this limitation of our method.)*
However, the key assumption underlying our approach is that the test domain is not necessarily close to any single training domain, but is still **related** to them. This assumption is aligned with the domain adaptation theory of Ben-David et al. [1], where the H-divergence between the test and training domains is bounded by a threshold. This setting is reflected in datasets like those in DomainBed, for example:
1) PACS: Photo, Art, Cartoon, Sketch
2) TerraIncognita: images captured from distinct geographical locations.
As requested by the reviewer, we have followed up on the experiments described in our rebuttal Q2 (Diverse Model Pools) and evaluated the performance when replacing the central domain prior in SRM with the average domain prior. The results are presented below:
| Algorithm | L100 | L38 | L43 | L46 | Avg |
|-------------------------|------|------|------|------|------|
| Uniform | 50.4 | 42.6 | 62.0 | 38.7 | 48.4 |
| ERM | 51.3 | 43.7 | 61.4 | 39.5 | 49.0 |
| Group DRO | 50.9 | 43.2 | 61.8 | 39.2 | 48.8 |
| Average Domain Prior | **52.1** | 43.7 | **62.3** | 39.4 | 49.4 |
| SRM | **52.1** | **44.3** | 61.5 | **40.0** | **49.5** |
We observe that in domain L43, the Average Domain Prior achieves the best performance, and in L100, it performs nearly identically to SRM (52.09% vs. 52.13%). However, SRM outperforms the average domain prior in the other two domains. We speculate that domains L43 and L100 (with accuracies of 61.5% and 52.1%) are more closely aligned with the training domains, while L46 and L38 (40% and 44.3%) are more distant.
Additionally, we would like to clarify that in the PACS example, our focus is not limited to the “Art” domain. The centrality scores in this case are: Art (0.38), Photo (0.33), and Sketch (0.29). Theorem 4.2 in our paper shows that the centrality measure naturally assigns higher scores to distributions in denser regions of the distributional space.
We will incorporate these insights into our revised manuscript. We sincerely appreciate the reviewer’s constructive feedback, which has helped us improve the clarity and quality of our paper. Please let us know if you have any further questions or suggestions.
**Reference**
[1] Ben-David, Shai, et al. *A theory of learning from different domains.* Machine Learning 79.1–2 (2010): 151–175. | null | null | null | null | null | null |
Diagonal Symmetrization of Neural Network Solvers for the Many-Electron Schrödinger Equation | Accept (poster) | Summary: This work investigates the impact of symmetrization in neural network wave functions for periodic solid systems. Specifically, the authors compare data augmentation, group averaging, and canonicalization. Contrary to other fields, the authors find that symmetrization may hurt performance, but post-hoc averaging generally yields better estimates.
## After the rebuttal
I increased my score in light of the authors’ rebuttal. However, I remain skeptical about the correctness of the evaluation. Biased estimates have frequently been an issue in NN-VMC, and are the reason why the number of MCMC steps typically increases linearly with the system size, as seen in [1]. Still, my main concern is the limited scope of this work. While it is certainly very valuable, I find the current scope too limited for a top-tier conference. I encourage the authors to broaden their evaluation, supporting more widely applicable statements.
[1] von Glehn et al. "A Self-Attention Ansatz for Ab-initio Quantum Chemistry"
Claims And Evidence: The authors make the following claims:
1. symmetrization to diagonal groups destabilizes training and can lead to worse performance.
2. post hoc averaging is effective in improving neural network solvers.
The paper supports these claims, but only to a limited degree due to the narrow evaluation and specific problem statement. There also appear to be theoretical misunderstandings about how to compute energy gradients in VMC, which most likely invalidate the data augmentation results. Contrary to the statement in the text, computing energy gradients does not involve computing $\partial_\theta\partial^2_x\psi_\theta$; instead one computes $\nabla_\theta E_{\psi^2}[\psi^{-1}H\psi]$, which corresponds to $E[(E_L - E[E_L])\nabla_\theta\ln\psi]$ with $E_L=\psi^{-1}H\psi$, since the Hamiltonian is Hermitian, i.e., $E[\nabla_\theta E_L]=0$. Thus, the gradient being computed is $\int \left(\nabla_\theta \frac{\psi(x)^2}{\langle\psi\vert \psi\rangle}\right) E_L(x)\, dx$. Getting the expectations right is important: in the data augmentation setup, the sampling distribution underlying these expectations is changed, yielding incorrect estimates and gradients. Further, contrary to standard data augmentation, a batch here always includes the original sample and all of its augmentations.
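The covariance-form VMC gradient discussed above can be checked on a toy 1D harmonic oscillator with a Gaussian ansatz $\psi_a(x)=e^{-ax^2/2}$. This is an illustrative sketch unrelated to the paper's solver; the factor of 2 follows the standard real-valued VMC estimator.

```python
import numpy as np

# 1D harmonic oscillator: H = -1/2 d^2/dx^2 + x^2/2, ansatz psi_a = exp(-a x^2 / 2).
# Analytically E(a) = a/4 + 1/(4a), minimized at a = 1 with E = 1/2.

def local_energy(x, a):
    # E_L = -1/2 (d^2 ln psi + (d ln psi)^2) + V(x), with ln psi = -a x^2 / 2,
    # so d ln psi = -a x and d^2 ln psi = -a.
    return -0.5 * (-a + (a * x) ** 2) + 0.5 * x ** 2

def grad_estimate(x, a):
    # Covariance form: 2 * E[(E_L - E[E_L]) * d_a ln psi], with d_a ln psi = -x^2/2.
    e_loc = local_energy(x, a)
    return 2.0 * np.mean((e_loc - e_loc.mean()) * (-(x ** 2) / 2.0))

# Samples from psi_a^2, a zero-mean Gaussian with variance 1/(2a). At a = 1 the
# ansatz is the exact ground state: E_L is constant and the gradient vanishes.
rng = np.random.default_rng(0)
a = 1.0
x = rng.normal(0.0, np.sqrt(1.0 / (2.0 * a)), size=10_000)
```

Note the estimator is only unbiased when the samples `x` are actually drawn from $\psi_a^2$, which is exactly the expectation-mismatch issue raised for data augmentation.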
Another problem to support these claims is that the evaluation is limited to a single neural network wave function, limiting the impact of the statements on this neural network and the concrete symmetry groups. However, I acknowledge that the list of known symmetries is short for open boundary conditions.
Methods And Evaluation Criteria: The chosen periodic compounds are well chosen but the selection could be expanded to a cover a larger number of systems.
Theoretical Claims: It is unclear to me what the message of Proposition 4.1 is. I generally found the notation unnecessarily complicated and partly undefined, e.g., the definitions of F and Q in l. 169/170.
Whether sampling or gradient computations dominate the computations typically depends on the system size in VMC. For many electrons, sampling may take more time; e.g., in [1], the authors increase the number of sampling steps with the system size, and the gradient computation can be significantly accelerated [2].
[1] von Glehn et al. "A Self-Attention Ansatz for Ab-initio Quantum Chemistry"
[2] Li et al. "A computational framework for neural network-based variational Monte Carlo with Forward Laplacian"
Experimental Designs Or Analyses: See above regarding the validity of the data augmentation scheme. Additional wave functions would be interesting, e.g., [1].
[1] Gerard et al. "Transferable Neural Wavefunctions for Solids"
Supplementary Material: I skimmed Appendix E but did not read the proofs.
Relation To Broader Scientific Literature: Prior literature in neural network VMC focused primarily on accurate calculations by opening all degrees of freedom. This is mostly because most works focus on open boundary conditions where little is known about the symmetries of the wave function itself, only about the symmetries of its observables.
Essential References Not Discussed: The authors could discuss the difference to canonicalization schemes like [1,2] which focus on obtaining the right symmetries for observables rather than the wave function itself.
[1] Gao et al. "Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave Functions"
[2] Gerard et al. "Gold-standard solutions to the Schrödinger equation using deep learning: How much physics do we need?"
Other Strengths And Weaknesses: The authors focus strongly on the negative results. Unfortunately, I don't see these supported very well, due to the limited and partially flawed evaluation. The post-hoc averaging technique presents a valuable improvement for future solid VMC calculations, but may be too limited in scope to justify acceptance. Extending the scope to other structures and models, or finding efficient symmetrization techniques that scale well, would strengthen the paper's position.
Other Comments Or Suggestions: The justification for not including DA as a post-hoc method is very obscure (l. 358), but given that it would yield incorrect estimates, it should not be included anyway.
The statement about the Laplacian in the robustness-to-outliers discussion is incorrect; in practice, one does not compute $H\psi$ and then divide by $\psi$, but instead directly uses the formulation $\psi^{-1} H \psi=\nabla^2\ln\psi + (\nabla\ln\psi)^2$.
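The identity cited here (for the kinetic term, up to the factor $-\tfrac12$ and the potential) can be verified numerically. A small finite-difference sketch with an arbitrary smooth trial function, purely as an illustration:

```python
import numpy as np

# Check the identity psi^{-1} (d^2/dx^2) psi = d^2 ln psi + (d ln psi)^2
# via central finite differences for a generic positive trial function.
def psi(x):
    return np.exp(-0.3 * x**2 + 0.1 * np.sin(x))

def log_psi(x):
    return -0.3 * x**2 + 0.1 * np.sin(x)

def second_deriv(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def first_deriv(f, x, h=1e-4):
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 0.7
lhs = second_deriv(psi, x) / psi(x)                              # psi^{-1} d^2 psi
rhs = second_deriv(log_psi, x) + first_deriv(log_psi, x) ** 2    # log-domain form
```

Working in the log domain avoids the potentially huge dynamic range of $\psi$ itself, which is why NN-VMC implementations parameterize $\ln\psi$.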
Questions For Authors: * Is there a way to translate these insights to open boundary conditions?
* Can we incorporate the right symmetries directly in the architectures like SO(3)-equivariant force fields?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and apologize for any confusion that arise from presentation issues or differences in understanding, which we now address.
---
**Correctness of gradient.** We thank the reviewer for the typo: $\partial_\theta \partial_x^2$ should indeed be $\partial_x^2$. The expectation is **otherwise correct**: F and Q are just generic update terms that
- accommodate specific implementations of the gradient e.g. KFAC;
- allow Prop 4.1, Lem 4.2, Thm D.1 to be *not tied to* a specific implementation.
We also stress that **the reviewer's formula is exactly the one that we compute numerically**; DA is run directly on the original DeepSolid except that gradient is computed on an augmented set of samples.
---
**Validity of DA,** which seems to be the main concern.
- We stress that DA is *not* the method we champion. It's a popular ML method that one expects to help under approx. invariance; we follow its standard usage and discuss how it can fail.
- As stated in Sec. 4.1, we follow standard usage to randomly sample DA **with replacement** (note the i.i.d. part). Unlike what the review suggests, the data is **not guaranteed** to include *either* the original *or* all the augmented data.
- Indeed DA computes $E[F_{X,\psi}]$ under a different distribution of X without changing $\psi$. Yet this gives a **biased estimate, not an invalid estimate**. There are many other sources of bias e.g. stochastic gradient, KFAC approx etc., **some of which are known to help training**, and the gradient estimate is never unbiased in practice. The DA estimate is physically valid when $\psi$ is exactly invariant -- where DA does nothing -- and gives a regularized estimate when $\psi$ is approx. invariant. In ML, DA is routinely used under approx. invariance ([6,7,8] cited in the response to Reviewer mfuU) and **its bias is known to enforce regularization towards symmetry** ([5,9] cited in the response to Reviewer mfuU). For us, approx. invariance of $\psi$ is enforced by pre-training and visible in Fig. 5(c). Moreover, DA gives **a physically valid energy** (Fig. 5(a)), just not an improved one. **We do agree this is important to clarify and will add this point.**
---
**Sampling vs gradient cost.** We thank the reviewer for highlighting the nuance that there may be better algorithms s.t. per-iter gradient compute is faster than per-iter sampling, esp. if the system requires more MCMC steps for good sampling. While we don't observe this in our setup, **we'll address it in the discussion**. We stress that the per-iter compute comparison is *not central* to our arguments, with some small additional nuances:
- DA instability: Our DA batch size is k x N/k = N. If sampling dominates, one may exploit the sampling speedup by DA to get N' > N/k samples. Yet if N' < N, the use of fewer i.i.d. samples still inflates the variance; if N'=N, DA doesn't destabilize but the k DAs always increase compute cost. The tradeoff now depends on how large N' can be and thus how fast gradient compute is v.s. sampling.
- GA: Unchanged; both sampling and gradient costs increase.
- PA: Still computationally attractive, as overall training cost typically outweighs inference cost. The per-iter MCMC steps is multiplied by a large no. of training iters.
---
**Correctness of robustness.** We stress that the reviewer gives a practically useful but **mathematically equivalent** formula to $\psi^{-1} H \psi$, to which the *exact same discussion* applies, just in a more cumbersome form. Note that differentiating $\ln \psi$ gives $\psi^{-1}$.
---
**Over-emphasis on the negative results; more scalable symmetrization needed.** We note that our work **mainly seeks to offer perspectives rather than a universally good symmetrization**: We study a particularly hard case of symmetries and examine the strengths and limitations of known ML symmetrizations. Given the success of posthoc averaging, one naturally expects in-training averaging / augmentations would help even more. **Our negative results are crucial in examining their potential failure points** and why that's not the case for PA. We also expect the negative results to be of **independent interest** to the applications of DA and GA **beyond a physics context**. We do agree that finding more scalable symmetrization for a large class of setups is an interesting open problem. Yet as the reviewer suggested, this requires finding the best combination of architecture, optimizer and symmetrization, and it's hard to tell whether the improvement is from symmetry or from other factors; see response 1.3) to szMY. We believe **part of the merit of our work is an apples-to-apples comparison on a fixed architecture with vs. without symmetry.** We will clarify these points in the revision.
---
**References and questions**: See response 1) to szMY.
---
If the above help to answer essential concerns behind the rejection, e.g. validity of DA, we'd be grateful if you'd consider raising the score. | Summary: The paper studies diagonal group symmetries in neural network solvers for many-electron Schrödinger equations, comparing different symmetrization approaches: data augmentation (DA), group averaging (GA), and post-hoc averaging (PA). The main claim is that in-training symmetrization can hurt performance while post-hoc averaging helps.
Claims And Evidence: The paper's theoretical analysis of gradient variance (Proposition 4.1) is well-derived and shows how DA introduces a variance inflation factor (k-1)/N. This manifests empirically in Figure 3, where DA with k=12 shows ~1.5x higher normalized gradient variance compared to baseline.
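A crude numerical illustration of the effective-sample-size intuition behind this inflation: perfectly correlated copies stand in for augmentations here, which is not the paper's exact setting or its $(k-1)/N$ term, but it shows how filling a batch with dependent samples inflates the variance of batch averages.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, trials = 120, 12, 20_000

means_iid, means_aug = [], []
for _ in range(trials):
    iid = rng.normal(size=N)          # N independent samples
    base = rng.normal(size=N // k)    # only N/k independent samples...
    aug = np.repeat(base, k)          # ...copied k times to fill a batch of size N
    means_iid.append(iid.mean())
    means_aug.append(aug.mean())

ratio = np.var(means_aug) / np.var(means_iid)  # ~ k for perfectly correlated copies
```

Real augmentations are only partially correlated, so the actual inflation lies between 1 and k, consistent with the ~1.5x figure observed for DA with k=12.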
The post-hoc averaging results are compelling for the tested systems: e.g., post-hoc averaging improves the energy from -8.138 to -8.1507 and reduces the variance. However, the paper only tests three systems, all using the DeepSolid architecture. Testing on more architectures (e.g., FermiNet, PauliNet) would strengthen the claims about PA's general effectiveness.
The claim that "in-training symmetrization destabilizes training" seems to conflict with the finding in (https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.5.013216), which demonstrates that properly incorporating symmetries into network architecture can actually improve training stability and efficiency in quantum lattice models. While the settings are different (lattice vs continuous systems), this contradiction deserves more discussion. The authors' theoretical analysis focuses on computational-statistical tradeoffs, but may be missing other mechanisms through which symmetries could aid optimization.
Methods And Evaluation Criteria: The evaluation uses standard VMC metrics: ground state energy and local energy variance. The energy improvements are significant given that chemical accuracy is ~0.00159 Ha. However, the paper lacks evaluation of other physical properties, such as electron density or correlation functions, that could provide additional validation.
The computational cost analysis in Table 1 is detailed but raises questions. For graphene 1x1, GA with N=1000, k=12 takes 25s per step versus 2.5s for baseline, which is a 10x slowdown. The paper doesn't fully explain this large difference given that only k=12 group operations are being averaged. Additionally, the choice of KFAC optimizer is not justified - recent work has shown other second-order methods can be more effective for quantum many-body problems (e.g. https://www.nature.com/articles/s41567-024-02566-1).
Theoretical Claims: The high-dimensional CLT result (Theorem D.1) provides rigorous bounds on the distribution of gradient updates. However, a key assumption seems strong, and its practical validity isn't verified empirically.
The smoothed canonicalization analysis shows an O(nk) cost scaling with number of electrons and group size. This explains why SC performs poorly in practice, but the paper could better explain if this limitation is fundamental or implementation-specific.
Experimental Designs Or Analyses: The training setup uses KFAC optimization with batch sizes adjusted for computational budget. However, the sensitivity to these choices isn't explored - would different batch size ratios change the relative performance?
The MCMC chain length of 30000 steps for evaluation seems reasonable but isn't justified. Given that PA's benefits come from better sampling, analyzing how results vary with chain length would be valuable.
Supplementary Material: The appendices contain detailed proofs, additional experimental results, and comprehensive technical details. The organization is clear and supports the main text well. However, some critical experimental details are only found in the appendix, and certain aspects of the computational cost analysis need more detail. The smoothed canonicalization analysis could be more complete, and implementation details would be valuable for reproducibility.
Relation To Broader Scientific Literature: The paper provides some coverage of recent neural network VMC methods but comparisons with other symmetry approaches are limited.
Essential References Not Discussed: https://www.nature.com/articles/s41567-024-02566-1 (second order optimization)
https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.5.013216 (symmetry)
And many other foundational works in VMC are also missing, including
https://www.science.org/doi/10.1126/science.aag2302
and many other works from the authors of the papers mentioned above
Other Strengths And Weaknesses: Already mentioned above.
Other Comments Or Suggestions: No other comments.
Questions For Authors: 1. For LiH, PA with F222 (k=16) achieves most of the improvement of Fm$\bar{3}$m (k=192). Is there a systematic way to choose minimal symmetry groups that capture most benefits?
2. The gradient variance analysis assumes first-order optimization, but experiments use KFAC. Can the theory be extended to second-order methods? Would this explain why GA performs better than DA despite similar variance bounds?
3. For bcc-Li, different subgroups (P4/mmm, Fmmm) give similar improvements despite capturing different symmetries. What determines which symmetries are most important for improving the wavefunction?
4. Why was KFAC chosen as the optimizer? Have you compared with other second-order methods like (https://www.nature.com/articles/s41567-024-02566-1)
5. How does your post-hoc averaging approach compare with other symmetry preservation methods (e.g. https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.5.013216, where they show incorporating symmetries architecturally can improve training in quantum lattice models)?
6. How does your post-hoc averaging approach handle solutions allowing for different symmetry sectors?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback, which we address below. We will add **a new discussion and limitation section** to reflect the discussions.
---
1) **PA's effectiveness on more architectures.** Indeed it's interesting whether PA is effective beyond DeepSolid. We stress that, as motivated in Sec. 2, we focus on a class of difficult restricted symmetries, i.e. diagonal space groups, for which only **model-agnostic symmetrizations are known**, which lead to our choice of DA, GA and PA. In the context of these symmetries, **DeepSolid is the only architecture we know that performs well on infinite periodic solids**, and is itself based on FermiNet. We do expect that for the *non-solid* systems handled by the original FermiNet and PauliNet, **architectural symmetrization** (e.g. the invariant maps discussed in our Sect. 2) may be possible and can be more effective than PA -- see response 1) to szMY.
---
2) **Conflicts with known benefits of symmetries.** The seemingly conflicting conclusions are related to point 1 in that we were forced to consider *model-agnostic symmetrizations*. For symmetries where *architectural symmetrization* is known, we don't expect them to suffer from the same cost tradeoffs or the same instability we saw for DA and GA. **We agree that symmetries may act via mechanisms we didn't address.** Here, we **try to minimize the influence of other mechanisms** in 2 ways: a) Focusing on the apples-to-apples comparison on a **fixed architecture with fixed optimizer and hyperparameters** and only varying the symmetrization; b) Citing a high-d CLT result that reduces the effect of DA to bias and variance (considered in a rich body of work on Gaussian universality and e.g. [5] cited in the response to Reviewer mfuU), and understanding their effects empirically by stability and energy performance. **There could be other mechanisms that we didn't observe, and we'll mention this as a limitation.** We also stress that, in view of recent findings in AI4Science (see our response 1.3) to szMY), whether symmetry helps with performance is an open question in general, and **our apples-to-apples comparison constitutes a concrete attempt to minimize confounding effects** in the cases of DA, GA and PA.
---
3) **10x compute difference.** Apologies if it's unclear. GA with 12 symmetries requires 12x more evaluations of the network at the different inputs, which is expected to increase the cost significantly. This is why batch size was set to N/k to keep the same cost.
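The cost bookkeeping in this response can be sketched as follows (a toy illustration; the helper name is ours, and N=1000, k=12 echo the graphene figures discussed in the review):

```python
# Group averaging (GA) evaluates the network at k transformed copies of
# each sample, so a fixed per-step budget of network evaluations forces
# the batch size down to roughly N/k.
def ga_batch_size(baseline_batch: int, group_size: int) -> int:
    """Batch size for GA at (approximately) the baseline evaluation cost."""
    return baseline_batch // group_size

N, k = 1000, 12
b = ga_batch_size(N, k)
print(b)      # GA batch size
print(b * k)  # total network evaluations per step, close to N
```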
---
4) **KFAC vs alternatives.** We choose KFAC as it's the choice for DeepSolid and to ensure a fair comparison with the original model; see 2) above.
---
5) **Explanations of SC.** The limitation is specific to the "average-near-the-boundary" approach of building SC. Detailed explanations are in Appendix E, as the precise mathematical setup of SC is tedious (also the case in Dym et al. '24, who proposed the SC method we adapt). **Whether an efficient SC is possible for complicated groups is an open theoretical question:** Dym et al. only recently established impossibility results for specific groups, and as mentioned in our Sec 2, building SC for diagonal space groups is related to the problem of maximal invariants and orbifolds, which are unsolved math problems. **We'll ensure these are clarified in the SC section.**
---
6) **Batch-size ratios.** If we read the question correctly, different ratios **are already investigated** in Fig. 7 of Appendix B.4.
---
7) **MCMC length.** We picked 30k as we empirically found that the sampled values had stabilised at this length. To address this comment, we obtained additional runs for LiH PA with F222: The energy and var values are (-8.147(1), 0.018(1)) for 20k and (-8.1486(9), 0.0167(9)) for 40k, both within error range of the 30k result. Note that PA's benefits aren't from better quality of sampling but from the fact that it is sampling a different wavefunction.
---
8) **References; compare with other symmetrizations.** See above comments and response 1 to szMY.
---
9) **Theory for 2nd order methods.** This was discussed at the end of Sec 4.1, which highlights that the key barrier is analysing the high-dimensional Hessian matrix. As a further remark, a precise analysis rests on analysing the eigenvalues of a large random matrix; when the matrix does not have i.i.d. entries, this is a known hard problem in random matrix theory and no general results are yet known.
---
10) **Choice of groups.** The interesting observations in LiH and bcc-Li are indeed why we include the results. We don't have a concrete answer, but conjecture the effects to be a) system-specific and b) dependent on what approx. invariances the wavefunction was pretrained to possess. **We'll discuss these as interesting directions for future work.**
---
We hope we've addressed as many comments as possible within the response limit. If they are helpful in addressing your concerns, we would be grateful if you would consider raising the score.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. I have raised my score. | Summary: This paper investigated different methods for incorporating diagonal symmetries in neural network solvers for the many-electron Schrödinger equation, with a particular focus on variational Monte Carlo (VMC) methods. Specifically, the authors studied three main approaches to enforce diagonal invariance: data augmentation, group averaging and canonicalization (adding invariant features). Surprisingly, the authors found that in-training symmetrization often hurts performance, while posthoc averaging improves accuracy and symmetry of the learned wavefunctions. Both theoretical and empirical results are presented to justify the efficacy of posthoc averaging.
## Update After Rebuttal
The reviewer is generally positive about the results presented in the paper and satisfied with the authors' responses to the questions raised here. Hence, the reviewer would like to maintain the score.
Claims And Evidence: Here is the main claim made in this paper: posthoc averaging is a better choice to enforce diagonal invariance compared to in-training symmetrization. Extensive experiments on Graphene, Lithium Hydride (LiH) and Metallic Lithium (bcc-Li) are included to compare posthoc averaging with other in-training symmetrization methods like data augmentation and group averaging, which support the main claim in a clear and convincing way.
Methods And Evaluation Criteria: Yes. The experiments are mainly based on Graphene, Lithium Hydride (LiH) and Metallic Lithium (bcc-Li), which are standard examples used in the DeepSolid paper. The authors study different symmetrization methods (data augmentation, group averaging and posthoc averaging) and compare their performance based on the induced wave functions and associated quantities like the ground state energy.
Theoretical Claims: The only theoretical claim in the main text of the paper is Proposition 4.1. Its proof in Appendix G has been verified to be correct.
Experimental Designs Or Analyses: Yes, please refer to the "Methods And Evaluation Criteria" section above.
Supplementary Material: Yes, I did review proofs of the theoretical results presented in Appendix D, G and H. I didn't find any significant error.
Relation To Broader Scientific Literature: This work is mainly posited within the AI4Science literature on solving high-dimensional many-body Schrödinger equation based on wavefunctions parametrized via neural networks, which has applications in physics, chemistry and material sciences. In terms of methodology used in this paper, it mainly falls within the category of symmetry + machine learning.
Essential References Not Discussed: The reviewer finds that several key references are not discussed in this work. For instance, the authors didn't cite [1], which is, to the best of the reviewer's knowledge, one of the pioneering works that try to solve the many-body Schrödinger equation based on neural networks. Furthermore, in addition to the literature cited in the article, some other work like [2,3,4,5,6,7] have also studied the problem of incorporating symmetry within the machine learning based solvers of many-body Schrödinger equation. It might be meaningful for the authors to cite these articles and briefly discuss them as related work.
References:
[1] Carleo, Giuseppe, and Matthias Troyer. "Solving the quantum many-body problem with artificial neural networks." Science 355, no. 6325 (2017): 602-606.
[2] Mahajan, Ankit, and Sandeep Sharma. "Symmetry-projected Jastrow mean-field wave function in variational Monte Carlo." The Journal of Physical Chemistry A 123, no. 17 (2019): 3911-3921.
[3] Han, Jiequn, Linfeng Zhang, and E. Weinan. "Solving many-electron Schrödinger equation using deep neural networks." Journal of Computational Physics 399 (2019): 108929.
[4] Zepeda-Núñez, Leonardo, Yixiao Chen, Jiefu Zhang, Weile Jia, Linfeng Zhang, and Lin Lin. "Deep Density: circumventing the Kohn-Sham equations via symmetry preserving neural networks." Journal of Computational Physics 443 (2021): 110523.
[5] Lin, Jeffmin, Gil Goldshlager, and Lin Lin. "Explicitly antisymmetrized neural network layers for variational Monte Carlo simulation." Journal of Computational Physics 474 (2023): 111765.
[6] Abrahamsen, Nilin, Zhiyan Ding, Gil Goldshlager, and Lin Lin. "Convergence of variational Monte Carlo simulation and scale-invariant pre-training." Journal of Computational Physics 513 (2024): 113140.
[7] Zhang, Yaolong, Bin Jiang, and Hua Guo. "SchrödingerNet: A Universal Neural Network Solver for the Schrödinger Equation." Journal of Chemical Theory and Computation 21, no. 2 (2025): 670-677.
Other Strengths And Weaknesses: This paper uses a mixture of theory and practical experiments to justify the advantage of posthoc averaging compared to other approaches like data augmentation and group averaging, making it one of the first works to compare different ways of enforcing symmetry in the context of solving the high-dimensional many-body Schrödinger equation. However, some potential drawbacks also exist. Firstly, the study is mainly limited to the Variational Monte Carlo (VMC) solver, so it might be meaningful to explore other solvers like Diffusion Monte Carlo (DMC) as well - see for instance [1]. Secondly, it seems that the claims and proofs in the paper can be made in a more mathematically rigorous way (which can be left as future work). For instance, the references below study solving high-dimensional partial differential equations (PDEs) and symmetry from the perspective of statistical learning theory.
References:
[1] Han, Jiequn, Jianfeng Lu, and Mo Zhou. "Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach." Journal of Computational Physics 423 (2020): 109792.
[2] Jiao, Yuling, Yanming Lai, Dingwei Li, Xiliang Lu, Fengru Wang, Yang Wang, and Jerry Zhijian Yang. "A rate of convergence of physics informed neural networks for the linear second order elliptic pdes." arXiv preprint arXiv:2109.01780 (2021).
[3] Duan, Chenguang, Yuling Jiao, Yanming Lai, Xiliang Lu, and Zhijian Yang. "Convergence rate analysis for deep ritz method." arXiv preprint arXiv:2103.13330 (2021).
[4] Lu, Yiping, Haoxuan Chen, Jianfeng Lu, Lexing Ying, and Jose Blanchet. "Machine learning for elliptic PDEs: Fast rate generalization bound, neural scaling law and minimax optimality." arXiv preprint arXiv:2110.06897 (2021).
[5] Lu, Jianfeng, and Yulong Lu. "A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems." Communications of the American Mathematical Society 2, no. 1 (2022): 1-21.
[6] Lu, Yulong, Jianfeng Lu, and Min Wang. "A priori generalization analysis of the deep Ritz method for solving high dimensional elliptic partial differential equations." In Conference on learning theory, pp. 3196-3241. PMLR, 2021.
[7] Zweig, Aaron, and Joan Bruna. "Symmetric single index learning." arXiv preprint arXiv:2310.02117 (2023).
[8] Zweig, Aaron, and Joan Bruna. "Exponential separations in symmetric neural networks." Advances in Neural Information Processing Systems 35 (2022): 33134-33145.
[9] Zweig, Aaron, and Joan Bruna. "Towards antisymmetric neural ansatz separation." arXiv preprint arXiv:2208.03264 (2022).
[10] Zweig, Aaron, and Joan Bruna. "A functional perspective on learning symmetric functions with neural networks." In International Conference on Machine Learning, pp. 13023-13032. PMLR, 2021.
[11] Soleymani, Ashkan, Behrooz Tahmasebi, Stefanie Jegelka, and Patrick Jaillet. "Learning with Exact Invariances in Polynomial Time." arXiv preprint arXiv:2502.19758 (2025).
[12] Tahmasebi, Behrooz, and Stefanie Jegelka. "The exact sample complexity gain from invariances for kernel regression." Advances in Neural Information Processing Systems 36 (2023).
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. As all reviewers suggest additional references with some overlaps and similar questions on extendability, and due to the 5000 character limit per response, we address them together below.
---
1) **Essential references.** We thank all reviewers for mentioning interesting works. In the revision, we will **extend the introduction** and **include a new discussion and limitation section** to discuss these works. In detail,
**1.1) Essential pioneering works on neural network VMC solvers:** We will highlight [1] suggested by szMY (and also by wZs6) and other works by authors of [1].
**1.2) Comparisons to symmetrizations that do not address diagonal space group symmetries or architectures that are not known to be effective in solids.** We first stress that we focus on **a class of difficult restricted symmetries**, i.e. diagonal space groups, which notably have only *discrete* point group symmetries. As observed by Reviewer wZs6, this is quite different from continuous symmetries like SO(2) and SU(2). **We realize this may not be sufficiently emphasized in our abstract and intro and will do so in the revision.** Also see our Section 2 for why these restricted symmetries are harder than e.g. the richer continuous symmetries in E(3), where the same discussion can be extended to SO(2) and SU(2). Architecture-wise, we stress that in the context of diagonal space group symmetries, **DeepSolid is the only architecture we know that performs well on infinite periodic solids**. Reviewer vG6b has pointed out interesting papers under Essential References and notes alternative approaches that focus on symmetries of observables rather than of wavefunctions. We agree that these are parallel and complementary approaches of interest and will cite these works. We clarify that for solids, our considered symmetries are typically applicable to ground state wavefunctions that are described by states with zero crystal momenta. We also note that it is still an open question whether the architectures in those papers are effective in infinite periodic solids (since no solid benchmarks are reported) or for addressing the symmetries of the periodic solid. Making them work with solids would require additional tweaks, and **it will not be clear whether performance improvements or drops come from symmetrization or from additional architecture / hyperparameter tweaks**. This would invalidate our apples-to-apples comparison. However, we **wholeheartedly agree that it is important to survey more existing works on other symmetries**.
We will cite [2,3,4,5,7] suggested by szMY and the symmetry papers suggested by wZs6 and vG6b, and highlight the differences. See also the response to wZs6 about why our findings seemingly contradict known benefits of symmetries.
**1.3) Extensions to broader contexts.** **We agree with reviewer szMY** that it is interesting to understand the effects of DA, GA and PA in other solvers like DMC. **We also agree with reviewer vG6b that** it is interesting whether our insights are applicable to open boundary problems and SO(3)-equivariant force fields. We do want to stress that **our findings are specific to complicated symmetries for which natural invariant maps are unknown**, which force us to adopt *model-agnostic approaches from conventional ML*. For simple symmetries or continuous symmetries like SO(3), where many *architectural symmetrization approaches* are available, we expect different results, e.g. no tradeoffs in in-training symmetrization. Nevertheless, **given the recent findings in protein structures and atomic potential ([1] and [2] cited in the response to Reviewer mfuU) that symmetries are unnecessary for training**, it is indeed of general interest how much symmetry plays a role in performance versus other factors e.g. hyperparameters. Our work can be viewed as **a first step towards this question in VMC**, by doing the first apples-to-apples comparison on an architecture before and after symmetrization. **The suggested extensions by the reviewers are very interesting future steps and we will mention them in the new discussion and limitation section.**
---
2) **Theory papers, szMY.** We thank the reviewer for suggesting these very interesting works on analysing high-dimensional Schrödinger equations (including [6] suggested by the reviewer in **essential references**). Indeed, our analyses are constrained by our desire to stay as close as possible to the DeepSolid VMC setup; a more careful analysis involves overcoming theoretical obstacles such as convergence guarantee of KFAC and short MCMC chains in high dimensions. We will discuss these suggestions as future directions to explore.
---
We hope these answer your questions. If our responses are helpful, we would be grateful if you would consider raising your score.
---
Rebuttal Comment 1.1:
Comment: The reviewer would like to thank the authors for the detailed response, which has addressed most questions raised in the reviews. Overall, the reviewer is positive about the results and would like to maintain the score. | Summary: This paper investigates methods for incorporating diagonal group symmetries into neural network wave function ansatze for solving the many-electron Schrödinger equation via Variational Monte Carlo (VMC). The authors compare three main approaches: data augmentation (DA), group averaging (GA) and canonicalization. The central claim is that, unlike typical machine learning scenarios, explicitly enforcing symmetry in-training can destabilize the optimization process and potentially lead to worse performance compared to post-hoc symmetrization. The methods are evaluated using the DeepSolid architecture on systems like H2, LiH, and graphene. The paper also introduces a method for visualizing diagonal symmetries in the high-dimensional wavefunction space.
## Update after rebuttal
Thank you for the detailed rebuttal. Your clarifications on the broader context have convinced me of its relevance to ICML. I am increasing my score accordingly and look forward to the revised manuscript.
Claims And Evidence: The central claims are: in-training symmetrization (specifically group averaging) can destabilize VMC training and lead to poorer performance; post hoc averaging (PA) improves energy, variance, and symmetry effectively and efficiently. The claims are supported by both proofs and empirical results across different crystalline solids.
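As a schematic of the post-hoc step, the sketch below averages a trained, slightly asymmetric function over a discrete group orbit; the toy function and the sign-flip group are stand-ins we made up, not the paper's DeepSolid ansatz or an actual diagonal space group:

```python
import math

# Toy stand-in for a trained wavefunction that is only approximately
# symmetric (the 0.3*x term breaks the sign-flip symmetry).
def psi(x, y):
    return math.exp(-(x * x + y * y)) * (1.0 + 0.3 * x)

GROUP = [(1, 1), (-1, 1), (1, -1), (-1, -1)]  # sign flips: a Klein four-group

# Post-hoc averaging: average psi over the group orbit of the input.
def psi_pa(x, y):
    return sum(psi(sx * x, sy * y) for sx, sy in GROUP) / len(GROUP)

# By group closure, psi_pa is invariant under every group element.
x, y = 0.5, -0.2
assert all(abs(psi_pa(sx * x, sy * y) - psi_pa(x, y)) < 1e-12 for sx, sy in GROUP)
```

Because the averaged function only needs to be evaluated rather than retrained, this step avoids the in-training cost tradeoffs discussed for DA and GA, at a k-fold inference cost.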
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are largely appropriate for the problem of developing symmetry-aware neural network solvers for quantum many-body problems.
* The paper investigates symmetrization techniques: DA, GA, PA and PC
* The authors use appropriate metrics (energy and variance in local energy) and propose a new metric Var[PA/OG] to provide a quantitative measure of symmetry. They also use GPU hours as a measure of computational cost.
* The models are benchmarked on 3 crystalline solids with DeepSolid as the baseline.
Theoretical Claims: As a reviewer whose expertise lies in physics-informed ML for DFT and not specifically in VMC and associated ML methods, I have reviewed the statements of the theoretical results but have not rigorously checked the correctness of the proofs provided in the appendices. They seem plausible and the authors strengthen the claims with empirical results and have detailed proofs in appendices.
Experimental Designs Or Analyses: As stated in methods and evaluation criteria, the experimental design and analyses are reasonable, with good baselines, datasets, metrics and ablation studies.
Supplementary Material: I reviewed the appendices, particularly going through experimental and computational details, visualization explanation, details on DA, GA, and PC but skimmed through the proofs (especially sections E, F, G).
Relation To Broader Scientific Literature: The paper is well-written and establishes itself well in the context of NN solvers for ab-intio methods for the Schrödinger equation, symmetrization in machine learning and VMC methods.
Essential References Not Discussed: I can't definitively identify essential missing references within the core VMC or equivariant ML literature for diagonal groups. The paper appears to cite key works in neural network wavefunctions, general ML symmetrization and VMC.
Other Strengths And Weaknesses: The paper makes useful claims about properties of different symmetrization approaches, including the counterintuitive claim (from a standard ML viewpoint) that data augmentation can destabilize training. It provides a systematic comparison of these approaches applied during training and post-hoc. It is well-written and supports its main claims with both proofs and empirical results.
While this is a well-written paper with detailed proofs and empirical results, I am concerned that it is too specialized for a venue like ICML and might be more suited to a physics journal.
Other Comments Or Suggestions: 1. The paper could benefit from a brief discussion of how the findings might apply to other areas of physics simulation where similar symmetry constraints arise.
2. The visualization method in App C is a nice addition.
Questions For Authors: How sensitive are the results, particularly the ranking of the methods (OG, DA, GA, GAs, PA), to the choice of neural network architecture and hyperparameters like the learning rate and batch size? Could the instability observed with GA be tackled with careful hyperparameter tuning specific to that method?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and helpful feedback. We address the questions below.
---
1) **Relevance to other areas of physics simulation where similar symmetry constraints arise.** We agree that this would add value and is a very interesting avenue of future work. **In the revision, we will have a discussion and limitation section to discuss the relevance and comparisons with known cases of symmetries in other setups.** We do note that, as the other reviewers have also brought up comparisons with symmetrisation approaches in VMC for non-solid contexts (see response 1.2 to sZMY), we will prioritise those comparisons first and, if space permits, include a brief discussion on implications for other physics setups.
---
2) **Interest for the ICML community.** We believe the general message of this paper, especially the part regarding the surprising effects of DA and GA in training, **is of broad interest to the ML community even beyond a physics setup**. While symmetry has become a staple in highly structured problems such as those in AI4Science, the observed benefits of symmetry in practice are often entangled with architectural changes, optimizer changes and hyperparameter choices. **Recent findings in protein structures [1] and atomic potential [2] showed that symmetries may actually be unnecessary for performance improvement**, calling into question how much symmetry helps with performance versus other factors, e.g. hyperparameters. Our work can be viewed as **a first step towards this question** in the specific context of VMC for solids, as we perform **a strict apples-to-apples comparison on a fixed architecture with fixed optimizer and hyperparameters** and only vary the symmetrization methods. The computational-statistical tradeoffs we see for DA and GA are in fact applicable to any ML setups where sampling is performed in between gradient updates; one non-physics example in ML would be the contrastive divergence algorithm for training energy-based models (see e.g. [3] and [4]). **We will make sure this general applicability is mentioned in the revision.**
---
3) **Sensitivity of the results, e.g. the observed instability, to the choice of architecture and hyperparameters.** We thank the reviewer for the interesting question. For varying batch sizes, we have included a preliminary ablation test in Appendix B.4. For the learning rate, we used the default setup from DeepSolid to ensure an apples-to-apples comparison across all setups. In view of our theoretical results, we believe that the instability findings and the computational-statistical tradeoffs are independent of these hyperparameters. **For architectures, we only used DeepSolid since it is the only architecture we know that performs well for VMC with infinite periodic solids, and because we want to perform an apples-to-apples comparison to an architecture that is known to do well in these problems.** We do conjecture that the effects of symmetrization may change in the context of other architectures. One example is the situation where the system of interest possesses simpler symmetries and where more efficient symmetrizations exist (discussed in more detail in response 1.2 to reviewer szMY). Meanwhile, we agree that it is interesting to explore whether there exists a way to tweak DeepSolid in the "most optimal way" for DA or GA such that it outperforms PA. Although we feel that this is out of scope for the current paper, we do hope that our findings pave the way for these investigations: This is both by our discussions on the computational and statistical costs of DA and GA that one needs to be aware of when benchmarking the performances, and by offering tools such as the visualization method for understanding symmetry improvements.
---
We hope these answer your questions. If our responses are helpful, we would be grateful if you would consider raising your score.
---
**References used for this response and other responses**
[1] Abramson, Josh, et al. "Accurate structure prediction of biomolecular interactions with AlphaFold 3." Nature 2024
[2] Qu, E., and Aditi K. "The importance of being scalable: Improving the speed and accuracy of neural network interatomic potentials across chemical domains." NeurIPS 2024
[3] Hinton, G. E. "Training products of experts by minimizing contrastive divergence." Neural computation 2002
[4] Du, Y, et al. "Improved contrastive divergence training of energy based models." ICML 2021
[5] Chen, S., Edgar D., and Jane H. Lee. "A group-theoretic framework for data augmentation." JMLR 2020
[6] Lyle, C., et al. "On the benefits of invariance in neural networks." arXiv:2005.00178
[7] Benton, G., et al. "Learning invariances in neural networks from training data." NeurIPS 2020
[8] Yang, Jianke, et al. "Generative adversarial symmetry discovery." ICML 2023
[9] Balestriero, R., Leon B., and Yann L. "The effects of regularization and data augmentation are class dependent." NeurIPS 2022
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. Your clarifications on the broader context have convinced me of its relevance to ICML. I am increasing my score accordingly and look forward to the revised manuscript. | null | null | null | null | null | null |
Feasible Action Search for Bandit Linear Programs via Thompson Sampling | Accept (poster) | Summary: This paper studies the linear feasible action search problem, where the goal is to find a point in the input space such that some linear constraints are satisfied. In each round the learner can query a point and observe a noisy version of its constraints. The authors provide a Thompson sampling approach to this problem and provide theoretical guarantees on its performance, both in the standalone feasible action search problem and when applied to the safe linear bandit problem.
Claims And Evidence: - Some hyperparameters of the algorithm are not clearly specified (e.g. omega).
- From the writing of the paper, I do not understand if the noise distribution is inflated.
- Theorem 8 is not written very coherently. The sentence below says that it reduces to something much more interpretable, which is not clear to see. Perhaps it would be better to just state the more interpretable condition?
- On pg7 there is a discussion on the mild dependence on m, but m is not clearly defined here.
Methods And Evaluation Criteria: - There are no experiments in the main paper. In the appendix experimental results are shown but these are just comparison of their algorithm with different parameter settings. The algorithm run in these experiments seems to have a slightly different stopping time, it would be good to see how the original algorithm performs.
- Most importantly, it would be interesting to see an experimental comparison to Gangrade et al (2024b).
Theoretical Claims: - Statement and proof of Lemma 11 contains many typos.
- Lemma 5 – not clear what ‘under ball’ means? Not enough details were provided to verify proof sketch of Lemma 5. Looking at the full proof of Lemma 5 in B.1, there were also some steps I could not verify, e.g. line 858.
- I tried to verify the proof of Theorem 8 but could not. I particularly struggled with lines 1019-1025.
Experimental Designs Or Analyses: I do not follow the motivation for the experimental setup, nor do I really understand what all the results are showing. It would be good to clarify the writing of Appendix A and give motivation for all parameter choices.
Supplementary Material: I looked quickly at the experimental results and some of the proofs.
Relation To Broader Scientific Literature: - More referencing of related work that this paper builds upon would be good, e.g. line 55 the authors state that their approach builds on a recent bandit feasibility test but do not cite the work proposing this.
- The authors mention that their approach builds on prior work by Gangrade et al (2024b) but the comparison of the results in this paper to those results is not sufficient. The authors claim their approach is more computationally efficient (which I can believe), but they do not mention how their theoretical bounds compare to those in Gangrade et al (2024b). Additionally, it would be good to see experiments to demonstrate the claimed efficiency gains.
Essential References Not Discussed: Missing discussions of Bayesian anslyses of TS, e.g. Russo & Van Roy
Other Strengths And Weaknesses: The introduction included a lot of notation which was not always defined. This made reading it quite difficult.
It would be good to give specific examples of where it is not possible to know a safe action a priori – this doesn’t seem like too strong of an assumption in many cases (e.g. often there is prior data available). However, it appears to be one of the key motivations of this work so some clear examples would be good.
Other Comments Or Suggestions: Proofread carefully and make sure all notation is defined (and also include reminders to save having to scroll pages to find something hidden in the middle of a sentence).
Questions For Authors: 1) Theoretical and experimental comparison to Gangrade et al (2024b).
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Our thanks for your work.
Comparison to Gangrade et al 2024b. Thanks for pointing out these omissions, which we will fix.
- Theoretical: Gangrade et al 2024b only study testing. We can extend it to FAS via something like our Lemma 7. In this case, the bound is a) $O(d^2/\varepsilon^2 M_*^2)$ for their Algorithm 1, and b) $O(d^3/\varepsilon^2 M_*^2)$ for the $(2d)^m$-game relaxation proposed on their page 9.
- These are similar to Thm 9, up to a factor of $d$ for us relative to a). This shows up in all known efficient methods (line 347, col 2).
- While they don't talk about safety costs, similar bounds to ours should also hold for them.
- Experimental: As discussed starting line 150 col 1, the method of Gangrade et al. is inefficient. In particular, they propose choosing actions by solving $(2d)^m$ matrix games in each round (page 9 of their paper). The simulations in Appendix A work with $d = m = 9,$ and our method takes milliseconds for each round on a laptop. However, their proposal would need to solve $(18)^9>10^{11}$ games per round, which would literally take years!
- We will include a comparison for small $d,m$.
- However, making their method practical for nontrivial $d,m$ is an open problem beyond the scope of our submission. This would require studying how to appropriately relax and solve the optimisation problem described in (our) line 160 col 1.
- In particular, we want to point out that even simple ideas like nets don't work, since one would need a net of resolution $t^{-1/2},$ meaning $\Omega(t^{d/2})$ games per round, which is prohibitive even for $t\approx 100, d = 10.$
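As a quick arithmetic sketch of the game counts quoted above (our own illustration, not part of the original rebuttal): both the $(2d)^m$ relaxation at $d = m = 9$ and the net-based alternative at $t \approx 100$, $d = 10$ already exceed $10^{10}$ games per round.

```python
# Illustrative arithmetic only: per-round game counts quoted above.
d = m = 9
games_per_round = (2 * d) ** m      # (2d)^m = 18**9 for the relaxation
print(games_per_round)              # 198359290368, i.e. ~2e11 > 1e11

# Net of resolution t^(-1/2): Omega(t^(d/2)) games per round.
t, d_net = 100, 10
print(t ** (d_net // 2))            # 10000000000, i.e. 1e10
```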
---
Claims & Evidence: We disagree that terms are not defined, but are happy to clarify.
- $\omega_t$. This is defined on line 184 col 2 and used throughout the paper. It is the standard radius of the confidence ellipsoid.
- Noise distribution: The underlying noise distribution is stated as $\mathrm{Unif}(\sqrt{3d} \cdot \mathbb{S}^d)$ in Theorem 9. This is indeed "inflated" in the sense of the common terminology for linTS (although our paper is not about linTS).
- $m$: Recall that $\Phi_* \in \mathbb{R}^{m \times d}$ (line 30 col 2, line 162 col 2). So $m$ is the number of rows of $\Phi_*$, or equivalently, the number of unknown constraints. Line 334, col 1 (just before Theorem 8) actually explicitly says this.
- Theorem 8: We disagree. This theorem first describes a condition on a measure $\nu$. Then it defines a measure $\mu$ as the law of $\mathbf{1}_m \zeta^\top$ when $\zeta \sim \nu$ (equivalently, a pushforward of $\nu$ under the map $\zeta \mapsto \mathbf{1}_m \zeta^\top$), and says that $\mu$ follows a $(B,\pi)$-CO condition with instantiations of $B,\pi$ in terms of the condition on $\nu$. The CO-condition itself is described in Def. 4, and forms one of the central technical concepts of the paper.
----
Theoretical Claims:
- "Under $\mathsf{Ball}_t$": as stated on line 154 col 1, "An inequality $a \le b$ is said to hold under an event $E$ if $a {1}_E \le b {1}_E$, where ${1}_E$ indicates $E$." The statement of the lemma defines the event $\mathsf{Ball}_t$, and the inequality of line 267 col 2 holds once multiplied by an appropriate indicator. This "under an event" terminology is used to lighten the visual load of expressions in the cramped two-column format.
- Line 858: by definition, $\bar{\Phi}_t$ minimises $K(\Phi)$ over $\mathcal{E}_t$. Under $\mathsf{Ball}_t,$ $\tilde\Phi_t \in \mathcal{E}_t,$ so this inequality is trivial.
- Theorem 8 proof, line 1019 - 1025: There is a typo in line 995: as stated in the statement of Theorem 8, $H = \mathbf{1}_m \zeta^\top,$ so there is an extra $-$ sign in this line. If this is corrected, things should make sense.
- The display in 1019-1021 opens up $H_t$. Then, by the discussion previous to 1019, if $\zeta^\top u\_t \ge \\|u\_t\\|,$ we find that $\tilde{\Phi}\_t a\_{\*} \ge M\_{\*} \mathbf{1}\_m,$ which implies local optimism (the event $\mathsf{L}\_t$). But the condition on $\nu$, and the independence of $\zeta$ from the history, directly means that this event has chance at least $p$ given the history (which fixes $u_t$). Finally, the norm of the rows of $H$ is just the norm of $\zeta$, which is also controlled by the condition on $\nu$.
- Typo in Lemma 11: thanks. It should read $\\| (\Phi - \Psi) a\\|_\infty$.
- We will make sure to proofread the paper closely, and ensure that symbols are reiterated in their context to smooth over the issues you describe.
----
Relation to broader literature:
- Line 55: This is the work of Gangrade et al. 2024b (discussed line 150 col 1).
- Theoretical bounds of Gangrade et al. 2024b: see above.
----
Rationale of the experiments: In a nutshell, appendix A attempts to build and study a practical methodology out of the theoretical study in the main text. The plan is discussed in detail on page 11. We will both discuss the early stopping trick in the main text, and include plots without it.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifying some of my confusion about notation. I will keep my score the same as I still think experimental comparison to Gangrade et al 2024b would be good, and have not yet seen the results. I hope in the revised version the authors also make it explicit that Gangrade et al 2024b have improved theoretical guarantees.
---
Reply to Comment 1.1.1:
Comment: Apologies - we didn't realise that you wanted to see the experimental comparison. A pilot version of this is included below.
---
We ran an implementation of the method of Gangrade et al in the setting of their experiment with varying $M_*$ (which they call $\Gamma$), following their section 5. In particular, we set $d = 4, m = 2$. However, unlike their experiment, we take the domain to be the box $a \in [-1/2,1/2]^d$ (they looked at a ball domain instead), which allows us to be slightly faster in solving the games. Noise standard deviation is $0.1,$ and $\delta = 0.1$.
Within this setup, we studied the two scenarios that Gangrade et al look at (up to the ball $\to$ box domain change):
1. We study infeasible scenarios with the constraints $x_1 \ge M_*$ and $x_1 \le -M_*$, in which case the instance is infeasible, and the margin is $M_*$.
2. We study the feasible scenario with the constraints $x_1 \ge 1/2 - M_*$ and $x_2 \ge 1/2 - M_*$. Again, the margin is $M_*$. We set the approximation level in this case to $0.7$. If accepted, we will expand this to studying various values of $\varepsilon$, and more granular variation in $M_*$.
Each experiment was carried out $50$ times, and we studied $|M_*| = \{0.2, 0.4, 0.6, 0.8\}$ in both scenarios. Since you had indicated that you wanted to see results without early stopping, we do exactly that: both methods are run without the early stopping heuristic. Note that both methods would be sped up by this.
*Observations.* Firstly, every run of both methods was correct - this indicates that both methods are too conservative. However, surprisingly, not only does FAST take less time per round, but it also takes significantly fewer rounds in total! This can partly be expected: instead of (relaxed) linUCB/OFUL, FAST uses Thompson-sampling-based exploration, and so should enjoy the improved behaviour usually seen for TS. The difference becomes even more stark upon realising that the stopping time for FAST improves quadratically in the advantage that TS has. This, coupled with the ultimately quite lossy $L_1$-based relaxation used for EOGT (Gangrade et al), might explain the improvement.
Mean stopping time for infeasible case (mean over 50 runs, rounded to nearest whole number)
| $M_*$ | $-0.2$ | $-0.4$ | $-0.6$ | $-0.8$ |
|------------|-------|------|------|------|
| Ours | 887 | 170 | 58 | 24 |
| EOGT-based | 14436 | 2840 | 1031 | 374 |
Mean stopping time for feasible case (mean over 50 runs, rounded to nearest whole number)
| $M_*$ | $0.2$ | $0.4$ | $0.6$ | $0.8$ |
|------------|-------|------|------|------|
| Ours | 1000 | 224 | 95 | 39 |
| EOGT-based | 16472 | 3809 | 1598 | 618 |
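As a rough illustration (our own sketch, not part of the original thread, with numbers read off the tables above for the infeasible case at $M_* = -0.2$): combining the mean stopping times with the $(2d)^m = 64$ games EOGT must solve per round in this $d = 4, m = 2$ setup gives a total-cost ratio of roughly three orders of magnitude.

```python
# Illustrative consistency check using the reported mean stopping times
# (infeasible case, M* = -0.2) and the setup d = 4, m = 2.
games_per_round_eogt = (2 * 4) ** 2       # (2d)^m = 64 games per round
rounds_ours, rounds_eogt = 887, 14436     # mean stopping times above
sample_ratio = rounds_eogt / rounds_ours  # EOGT needs ~16x more rounds
print(round(sample_ratio))                # 16
print(round(sample_ratio * games_per_round_eogt))  # 1042, i.e. ~1000x
```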
Computational Costs: note that ultimately EOGT requires solving $(2d)^m = 64$ games per round, and takes about $16$ times more samples, so in total costs about $1000\times$ the cost of our method in this setup. To allow this simulation cheaply, instead of using a library linear programming method, we hard-coded the fact that $a$ lies in a box domain to reduce to optimising $\\| \lambda^\top \hat{\Phi} \\|_{1}$ for $64$ choices of $\hat{\Phi}$ in each round (further using the fact that $\lambda$ is essentially one-dimensional since there are only two constraints, and $\lambda$ sums to $1$). This took roughly one hour for the whole EOGT-based simulations, while the whole simulation for our method took about 2 seconds in total. | Summary: This paper studies the problem of identifying a point that satisfies a set of linear constraints, using only noisy observations of the linear constraints.
They give an algorithm that identifies a strictly feasible point after $\tilde{O}(\frac{d^3}{\epsilon^2 M^2})$ rounds, where $M_*$ is the largest safety margin of any action.
They then consider the safe linear bandit setting and show that by combining their algorithm with standard safe linear bandit algorithms, they an guarantee $\tilde{O}(\sqrt{T})$ regret and $\tilde{O}(1)$ cumulative violation (hiding all non $T$ factors).
## update after rebuttal
I maintain my positive score. The authors addressed the minor concerns that I had.
Claims And Evidence: Overall, the claims look okay. I am looking for some clarification on the guarantees for the safe linear bandit setting as detailed in the Questions box \# 3.
Methods And Evaluation Criteria: The approach of using thompson sampling for efficient decision-making is sensible. Evaluating the approach with stopping time, safety cost, regret, and risk makes sense.
Theoretical Claims: My only possible concern with the theoretical claims is the proof of Corollary 10, which I detail in \# 3 in the Questions box.
Other than this, I did not check the proofs in close detail, but the theoretical results are consistent with what I would expect.
Experimental Designs Or Analyses: There are no experiments.
Supplementary Material: I only looked at the proof of Corollary 10 in the appendix.
Relation To Broader Scientific Literature: To my understanding, the paper has the following contributions to the literature:
1. An efficient algorithm to the problem of identifying a feasible point of linear constraints with noisy feedback.
2. The analysis approach extends the linear TS approach of Abeille and Lazaric (2017) to this new setting.
3. An algorithm for safe linear bandits with $\tilde{O}(\sqrt{T})$ regret and $\tilde{O}(1)$ risk without knowing a feasible point. I think this contribution is more minor given that a simple extension of Gangrade et al. (2024b) would give the same guarantees.
Essential References Not Discussed: None that I can think of.
Other Strengths And Weaknesses: No others
Other Comments Or Suggestions: No others
Questions For Authors: 1. The problem setup states that the algorithm only needs to handle the cases where the optimal margin is either strictly less than zero $M_* < 0$ or strictly greater than zero $M_* > 0$. As such, this doesn't handle the case where $M_* = 0$. I think it is important for the paper to discuss the case of $M_* = 0$ as the problem is still feasible in this case.
2. What is the advantage of interpreting the minimum of the linear functions $\min_i \Phi_i a$ via the zero-sum game formulation $\min_{\lambda \in \Delta^m} \lambda^\top \tilde{\Phi} a$ ? This is mentioned several times and appears in the algorithm design, but at the end of the day it looks like the parameter distribution is chosen to be the same for each row of $\Phi$ (actually each row uses the same realization). Couldn't we have gotten to the same result by treating each of the rows as separate linear functions and then used the minimum, as is done in the regret minimization safe linear bandit literature (e.g. Pacchiano et al. 2021)? I agree that the paper's approach is elegant, but the game theoretic formulation also seems to add more difficulty in understanding.
3. In the application to safe linear bandits in Corollary 10, the paper applies the algorithm LC-LUCB from Pacchiano (2024). To my knowledge, LC-LUCB requires a safe action $x_0$ and the constraint value at that action $\Phi_* x_0$. However, the paper only passes a safe action $x_0$ and the margin lower bound $M_{out} \leq \min_{i} (\Phi_* x_0)_i$. I believe that it needs to be clarified how exactly LC-LUCB was applied here.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Our thanks for your work.
Questions:
- Corollary 10: This is indeed an important point, thanks for your careful reading. The issue is resolved by Remark 11 (page 8) and Appendix F.1 (page 47) of the paper of Pacchiano et al. (https://arxiv.org/pdf/2401.08016).
- In short, remark 11 summarises how it is enough for them to have a lower bound on the safety margin of the safe action that is a constant factor away. In our case, the margin of the output $\langle a\rangle_\tau$ is bounded between $L_\tau$ and $U_\tau$, and by the stopping condition, with $\varepsilon = 1/2,$ we have $L_\tau > U_\tau/2 > M(\langle a\rangle_\tau)/2$, so we get exactly such an approximation. There is however, a small modification needed (as discussed by Pacchiano et al.) in that the orthogonal projection stuff in LC-LUCB cannot be carried out with only a lower-bound. This does not affect the regret bounds.
- We will expand upon the discussion of section B.4 to discuss this aspect.
- We also wanted to point out that in fact most such methods for SLBs only really need a lower bound on the margin of the safe action (and the regret then scales with this bound). Intuitively, the point is just that this bound yields an initial exploratory set, and determines the rate of expansion of the safe set estimate. LC-LUCB is a bit of a special case because of the orthogonal projection step they take.
- $M_* = 0$: We will be happy to discuss this. Short answer is that in this case no method can operate in finite time.
- As an example, consider the case $m = 1, d = 1$, and $\mathcal{A} = \{ 1\},$ with standard Gaussian feedback noise. Then the questions becomes one of testing if $\phi_* = 0$ or $< 0$ using samples from $\mathcal{N}(\phi_*,1)$. But with any finite number of samples $n$, we cannot reliably distinguish between $\phi_* = 0$ and $\phi_* \in (-O(1/\sqrt{n}),0)$, and so we can never reliably conclude the procedure in finite time. This simple scenario embeds into more rich situations, and so if the optimal margin is exactly $0$, we can never be $\delta$-sure of this fact, and so never stop in finite time. This is also reflected in the $\Omega(d^2/M\_{\*}^2)$ lower bound (line 340 col 2), which diverges to $\infty$ as $M\_{\*} \to 0$.
- Rows instead of $\lambda$: Yes, we can write things in this way with no change. The reason why $\lambda$ was introduced was to give ourselves the option of using the minimax theorem when working the results out. This (surprisingly) ended up not being needed, but by the time we figured this out, much of the paper was already written in this language, and it was too late to rewrite all of it.
---
Relation to broader literature: while we mostly agree with your reading, we would like to point out that while yes, an extension of Gangrade et al. 2024b along the lines of our submission would do this job, this method is completely impractical.
- Their proposed relaxation (page 9 of their ICML paper) requires solving $(2d)^m$ matrix games in each round. With $d = m = 9$ (the setting studied in our Appendix A), this requires $>10^{11}$ games in each round, and so each round would take years on a laptop, while our method runs in milliseconds.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed and clear response. I maintain my positive score. | Summary: This work studies Feasible Action Search problems with linear constraints, in which a learner aims to find a point with maximal safety margin. The learner may declare that the constraints are infeasible if it fails to find a feasible point.
The authors suggest an algorithm called FAST. It is based on Thompson Sampling: in each time period, it makes a random perturbation on the estimated constraint matrix by adding some noise, and then selects a point that maximizes the safety margin with respect to the perturbed constraint matrix. The algorithm keeps track of an upper confidence bound on the optimal safety margin and a lower confidence bound on the progressive mean of safety margins, and employs a stopping rule that declares infeasibility when the upper confidence bound drops below zero or outputs the average of attempted points when the lower confidence bound becomes $\epsilon$-close to the upper confidence bound. Under a carefully chosen noise distribution for random perturbation, the algorithm finds an $\epsilon$-optimal point within $\tilde{O}(d^3/\epsilon^2 M_*^2)$ interactions with high probability, nearly matching the lower bound $\Omega( d^2/\epsilon^2 M_*^2 )$.
The algorithm FAST can also be useful for SLB problems in the sense that by running FAST in the initial phase the learner can find a feasible solution and then run SLB algorithms starting from the discovered feasible solution.
Claims And Evidence: This paper mainly focuses on making theoretical claims. The proofs look solid to me.
Methods And Evaluation Criteria: I believe that the suggested algorithm is sensible -- it can also be implemented in practice.
However, the choice of performance measure is not completely convincing to me. The suggested algorithm aims to maximize the safety margin, whereas the eventual goal would be to find out one feasible action.
Theoretical Claims: I carefully read the statements and the discussions provided in the main body of the paper, and I couldn’t find any suspicious claim. Although I am not that familiar with the random perturbation technique, I was able to understand the technical novelty made by the authors (global optimism vs. local optimism).
Experimental Designs Or Analyses: Some simulation study was included in Appendix. Although I couldn’t check all the details, the experiment was designed reasonably, and the behavior/performance of the algorithm is well reported.
Supplementary Material: The supplementary material was not submitted.
Relation To Broader Scientific Literature: The authors have already discussed the relation to a wide range of literature, namely, Best Arm Identification, Best Safe Arm ID, Minimax and Pareto Bandits, Feasibility Testing, etc. I guess that this work may also be related with online linear programming (e.g., Li and Ye (2021), Online Linear Programming: Dual Convergence, New Algorithms, and Regret Bounds), but I am not certain about the connection.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: This work adopts the safety margin as the learner’s objective. This choice allows the authors to utilize the techniques developed in minimax/pareto bandits, deviating from the existing work on safety bandits. This would be a unique feature of this work, but further justifications are needed. Why do we need the safest action if our goal is to optimize another objective eventually? Perhaps, the problem should be called “Safest Action Search” instead of “Feasible Action Search”.
Other Comments Or Suggestions: See Questions for Authors.
Questions For Authors: - Will the UCB version of the suggested algorithm work? I am imagining the UCB-like algorithm that runs without random perturbation. More specifically, the algorithm may pick up the query point optimistically within the confidence ellipsoids for $\Phi_*$, by solving $\max_a \min_\lambda \max_{\Phi \in C_t(\delta)} \lambda^\top \Phi a$. I guess that it could be easier for the authors to analyze this UCB version…
- Does the notion of local optimism lead to over exploration? Since the occurrence of local optimism ensures the occurrence of global optimism, making the local optimism occur sufficiently often may induce global optimism to occur too often.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Our thanks for your work.
- Max-safety-margin v/s only feasible. Thanks for bringing up this important point. The short answer is that our setup already accommodates the viewpoint you express through our use of the multiplicative approximation.
- Indeed, any feasible point would have a safety margin of at least $0$, and so would satisfy our notion of approximation with $\varepsilon = 1$ (since $0 \ge (1-1)M_*$). So, if we just run FAST with $\varepsilon = 1,$ we will stop with a feasible action in $\tilde{O}(d^3/M_*^2)$ time, gaining a factor of $\varepsilon^2$.
- We note that the stopping time bound above is just as tight as the general bound in Theorem 9. By the same reasoning as in line 340 col 2, if $m = 1,$ then finding a feasible point is equivalent to finding an $M_*$-approximate optimum in a low-regret problem, and so needs $\Omega(d^2/M_*^2)$ samples. We lose a factor of $d$, which appears in all known efficient methods (line 346 col 2).
- Finding a point with near-maximum margin of course has practical applications, mainly in terms of building resiliency. For instance, a manufacturer balancing various quality and cost constraints would want the solution to remain feasible under small perturbations of the process (translating to perturbations of $\Phi\_*$ in our setting). This would be true for a point with positive safety margin, but not so if the margin is close to zero. $\varepsilon$ captures a tradeoff between the costs of identifying such a point, and its resilience.
- The application of optimising a different objective later is captured well by the low-regret SLB problem discussed in section 4. Here, in fact, positive margin is a necessity, since this margin determines the rate at which information can be accrued in the early rounds (also see the lower bound due to Pacchiano et al. 2021). This is reflected in the regret bounds scaling with $M(a\_{\mathsf{safe}})^{-1}.$ Our proposed solution improves upon this general bound: we pay an extra constant factor in the exploration costs to end up with a point with close to optimal margin, which in turn improves the regret by a potentially large factor of $M_*/M(a\_{\mathrm{safe}})$ (Line 80, Col 1). If instead we had stopped with near-zero margin, the regret would suffer.
- Of course, here exact optimal margin is not necessary, which is why Corollary 10 sets $\varepsilon = 1/2$ for this application.
- We do want to say that your point is well-taken, and these aspect should indeed have be clarified and discussed in more detail in the paper. If accepted, we will include something like the above to clarify these issues in the paper.
- UCB version: yes, this works, but is computationally infeasible.
- This method was proposed as a test by Gangrade et al 2024b (and can be extended to FAS via a tracking result similar to our Lemma 7).
- However, as discussed in the paragraph starting line 150, col 1, for continuous $\mathcal{A}$, this technique needs to solve $(2d)^{m}$ games in each round. This makes it completely impractical even in the setting we simulated in section A (since for $d = m = 9$, $(2d)^m > 10^{11}$). For instance, our method runs in milliseconds per round, but this method would take years. Finding an efficient UCB-version of such algorithms is an open problem.
- Does local optimism lead to over exploration? This may indeed be true, which is why the main CO condition (Def. 4) is formulated in terms of global optimism.
- However, we note that our bounds on the optimism rate match those of prior work on single-objective TS, even though they are studying global optimism.
- To be explicit: under the same conditions on the noise in which prior work shows $\pi$-global optimism, our coupled construction gives $\pi$-local optimism. This suggests that new techniques are needed to control global optimism rates well not only for FAST, but for linTS in general. This is beyond the scope of our submission.
- Relation to online linear programming: while interesting, this appears to be a quite different problem. In the setup in the reference, the constraints are revealed one-by-one (and completely), while in our case, constraints are never directly revealed, but we can noisily measure (all of) their values at various $a_t$. Of course, the other important difference is that they are aiming to optimise, while we are aiming at maximin points. Nevertheless, we will read this thoroughly to see if it is appropriate to discuss. | Summary: The paper studies the feasible action search problem for linear bandits. In particular, the goal of the learner is to identify a point in a convex set that satisfies a set of linear constraints. The learner repeatedly interacts with the environment and receives as feedback the value of all the constraints for the played action with some additional noise. The learner should stop when it identifies an action that almost maximizes the safe margin or determines that the system of linear inequalities is unfeasible. The approach is based on Thompson Sampling. The authors prove that the algorithm stops in $O(\frac{d^3}{\epsilon^2 {M_*}^2})$ rounds providing $O(d^3/|M_*|)$ cumulative constraint violation, where $M_*$ is the optimal safety margin. It detects infeasibility or identifies a $1-\epsilon$ multiplicative approximation of an optimal solution. The designed algorithm can be exploited to derive novel results for regret minimization in linear bandits with safety constraints. In particular, it can be used to remove the assumption that the learner knows a safe action.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No.
Experimental Designs Or Analyses: No.
Supplementary Material: No.
Relation To Broader Scientific Literature: The problem under study is novel, even though it has many connections with other works extensively discussed in the paper.
Essential References Not Discussed: The literature on primal-dual methods for online learning with long-terms constraint is closely related to Low-Regret SLBs. There, one crucial component is the estimation of the Slater's parameter, which is essentially your optimal safety margin. Probably, the most closely related work is [1], which derives a general primal-dual method (that can be applied to linear bandits) to learn a strictly feasible solution. Even if not explicitly stated, their results should imply some constant (independent from $T$) bounds for your setting.
[1] Castiglioni, M., Celli, A., Marchesi, A., Romano, G., & Gatti, N. (2022). A unifying framework for online optimization with long-term constraints. Advances in Neural Information Processing Systems, 35, 33589-33602.
Other Strengths And Weaknesses: The paper studies the important problem of learning safe strategies in linear bandits. This is a crucial component for designing regret minimization algorithms for linear bandits. Moreover, the proposed algorithm is efficient and the theoretical analysis is involved. Finally, the work has interesting connections with other problems as discussed in the comprehensive analysis of related works.
Other Comments Or Suggestions: Line 29 right: $\alpha$ should be $0$.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Our thanks for your work. Thanks also for the interesting reference. Section 8 of this paper certainly describes a related idea, although there is no real adaptation to the value of the Slater parameter ($\equiv$ margin), which is an important aspect of our study. We will both try to carefully work out the concrete implications (by working out the primal/dual regrets appropriate here), and of course discuss this paper in the related work section.
Towards Robustness and Explainability of Automatic Algorithm Selection | Accept (spotlight poster) | Summary: The paper focuses on the automatic algorithm selection problem. It aims to improve the explainability and robustness of algorithm selection. Main contributions include:
(1) The most significant innovation of this paper is that it changes the modeling approach of the algorithm selection task. Traditionally, models predict which algorithm to choose based on problem features and algorithm features. However, this paper's approach predicts algorithm features based on problem features. In other words, it focuses on "What characteristics an algorithm needs to solve a problem with specific feature values".
(2) Causal Modeling with DAG: The paper introduces a causal framework using DAGs to describe the underlying mechanism of algorithm selection.
(3) Model Framework: The DAG-AS model is based on a neural network framework incorporating causal learning principles. The model reconstructs algorithm features, and the final algorithm selection is made by comparing the original features of each candidate algorithm with the reconstructed features of the optimal algorithm.
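The selection rule described in (3) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `reconstruct_af` stands in for the learned reconstruction model, and the Euclidean distance is an assumed choice of comparison metric.

```python
import numpy as np

def select_algorithm(problem_features, candidate_afs, reconstruct_af):
    """Pick the candidate whose feature vector is closest to the
    reconstructed features of the optimal algorithm (metric is assumed)."""
    target_af = reconstruct_af(problem_features)
    distances = [np.linalg.norm(af - target_af) for af in candidate_afs]
    return int(np.argmin(distances))

# Toy example with a stand-in linear "reconstruction" model.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # problem-feature dim 4 -> algorithm-feature dim 3
pf = rng.normal(size=4)
candidates = [rng.normal(size=3) for _ in range(5)]
best = select_algorithm(pf, candidates, lambda x: W.T @ x)
```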
Claims And Evidence: Overall, the claims made in the submission are supported by clear and convincing evidence, including:
(1) DAG-AS's performance superiority
(2) Ablation study
(3) DAG-AS's robustness against distribution shifts
(4) The model-level explainability of DAG-AS
(5) The instance-level explainability of DAG-AS
Methods And Evaluation Criteria: The evaluation is conducted based on ASlib Benchmark and PAR10 Score, whose use aligns with the algorithm selection community's established practices.
Theoretical Claims: The paper presents Theorem 2.3 to support the use of causal models in the algorithm selection task. The proof appears to be correct in its logical structure. It first shows the existence of an algorithm feature whose parent set is composed of problem features through a proof by contradiction. Then, it uses topological ordering and mathematical induction to construct functions for each algorithm feature's conditional distribution.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper, including performance comparison, ablation study, robustness tests, and explainability experiments, are generally sound and valid for validating the DAG-AS model.
Supplementary Material: Yes. The Appendix contains background on algorithm selection and causal learning, a proof of Theorem 2.3, supplementary details of the causal learning module, methods for calculating exogenous variables for counterfactual explainability, and detailed experimental information, including benchmarks, comparing algorithms, performance comparison, ablation study, robustness evaluation, and demonstrations of model-level and instance-level explainability.
Relation To Broader Scientific Literature: Researchers in the AutoML and algorithm selection community will benefit from this study. The paper revolutionizes the modeling approach in algorithm selection, offering new inspiration for subsequent research, especially regarding explainability and robustness. However, while the paper uses techniques from causal learning and recommendation systems, its inspiration for these two fields is limited.
Overall, the primary audience for this paper is in the AutoML field, especially algorithm selection researchers. The findings and methods presented can be directly applied or adapted to improve algorithm selection algorithms, making it a valuable contribution to this specific research community.
Essential References Not Discussed: It is recommended to add a discussion on explainability in the Background section.
Other Strengths And Weaknesses: Strengths:
1. The paper introduces causal learning into the field of algorithm selection, offering a brand-new perspective to address the limitations of traditional methods. The DAG-AS model constructs a causal graph to clarify the causal relationships between problem features and algorithm features. This innovative modeling approach is groundbreaking in algorithm selection research.
2. The paper makes significant contributions in terms of explainability. The model-level explanation can visually display the dependency relationships among variables by analyzing the DAG structure, helping researchers understand the model's decision-making process. The instance-level explanation uses counterfactual interventions to accurately identify the key features that affect algorithm selection, providing actionable explanations for practical applications and enhancing the credibility and practicality of the model.
3. The experimental design is comprehensive and rigorous. The research covers various aspects such as performance comparison, ablation studies, robustness testing, and explainability verification, providing strong support for the viewpoints in the paper.
Weaknesses:
1. How to obtain algorithm features is a prerequisite for this study. The method in this paper assumes that algorithm features are known, but there are scenarios where algorithm features are not provided. Even in ASlib, only 10 datasets contain algorithm features. I suggest that the authors clearly introduce how to obtain algorithm features in the paper, which will improve the practicality of the method.
2. The Discussion in P7 is interesting. I think that after making significant changes to the modeling paradigm of algorithm selection in this paper, one of the key advantages is that the algorithm selection model can, in turn, promote the improvement of algorithms. Unfortunately, this paper does not discuss this content in detail. Therefore, I suggest either enriching this part of the content or including it in future work.
3. A minor issue: The symbol "a" in line 371 should be explained.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. The paper achieves multi-level explainability in algorithm selection, which represents the new value emerging from the application of causal concepts and recommendation idea into the algorithm selection field. Are the goals of this explainability consistent with those of the classic SHAP analysis? Or does the connotation of the explainability in this paper cover and go beyond what the SHAP analysis encompasses?
2. In instance-level explanations, the algorithm selection is explained by finding the minimum perturbations through counterfactual interventions. However, in practical operations, how can we determine the appropriate perturbation range and step size to ensure that effective explanations can be obtained? In the experiments of Figure 10, the authors achieved this by imposing certain constraints. But the optimization problem modeled in Eq. (15) does not involve these constraints. Is it necessary to make the form of the optimization problem in Eq. (15) consistent with that in the experiments?
3. How to obtain algorithm features is a prerequisite for this study. The method in this paper assumes that algorithm features are known, but there are scenarios where algorithm features are not provided. Even in ASlib, only 10 datasets contain algorithm features. I suggest that the authors clearly introduce how to obtain algorithm features in the paper, which will improve the practicality of the method.
4. In the ablation study, is the construction of the directed cyclic graph still based on the two assumptions of this paper? Why does the method using DAG perform worse than the method without considering causality in some scenarios?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > Weakness 1 / Q3
Thanks for your insightful comments. In practical scenarios, informative algorithm features are not always readily available, just as pointed out by Reviewer nswt. In this study, we recognize that the availability of algorithm features has a vital impact on the model's performance. Therefore, we will add an assumption and discussion about the informativeness of algorithm features in Section 2:
**Assumption 3**: Informative algorithm features are available for the algorithm selection task.
In the context of algorithm selection, the availability of informative algorithm features is a crucial yet often overlooked aspect. This assumption implies that we can obtain algorithm-related characteristics that carry meaningful information for differentiating the performance of different algorithms on various problem instances. This assumption serves as a fundamental basis for DAG-AS. Currently, there are several established algorithm representation methods, such as hyperparameters, model structure information, code-related statistical features, AST features, and code features extracted by LLMs. These features provided valuable insights into the algorithms' properties and significantly contributed to the performance of DAG-AS. However, we are also aware that in some cases, obtaining such informative features can be challenging. For example, code-related features cannot be obtained in the closed-source scenario.
> Weakness 2
We sincerely appreciate your valuable suggestions. Intervening on algorithm features indeed holds the potential to achieve broader goals beyond the algorithm selection task itself. In real-world scenarios, the most suitable algorithm may not always be present in the candidate set, or may not even have been designed yet. By examining $P(S=1 \mid \textbf{PF},do(\textbf{AF}=\textbf{a}+\delta_{\textbf{AF}}))$, we can explore how to adjust the optimal algorithms within the candidate set based on $\delta_{\textbf{AF}}$ for a given problem. This intervention can assist us in two ways:
1. It helps in searching for more suitable algorithms within or beyond the existing candidate set;
2. It generates valuable suggestions to aid in the automated algorithm design.
However, in the context of ASlib, due to the high level of abstraction of the algorithm features and the fact that many candidate algorithms are closed-source, we are unable to optimize these candidate algorithms according to the results of algorithm feature interventions. As a result, empirical study in this aspect is currently not feasible. In the future, such research could potentially be carried out in scenarios where the physical meaning of features is more explicit. For example, in the selection of deep-learning models, the hyperparameters and architecture features of the models can be used as algorithm features. Interventions on these features can help users to further improve existing models or generate new ones based on the candidate algorithm set.
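The intervention $P(S=1 \mid \textbf{PF}, do(\textbf{AF}=\textbf{a}+\delta_{\textbf{AF}}))$ described above could be explored, for instance, with a simple perturbation search. The sketch below is purely illustrative: the random-search strategy and the toy selection score are assumptions, not the authors' method.

```python
import numpy as np

def intervene_on_af(score, af, step=0.1, n_iters=100, rng=None):
    """Random-search sketch of do(AF = af + delta): look for a perturbation
    delta that increases a selection score standing in for P(S=1 | PF, do(AF))."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_delta = np.zeros_like(af)
    best_score = score(af)
    for _ in range(n_iters):
        delta = best_delta + step * rng.normal(size=af.shape)
        s = score(af + delta)
        if s > best_score:
            best_score, best_delta = s, delta
    return best_delta, best_score

# Toy score: selection probability peaks at a "target" algorithm-feature vector.
target = np.array([1.0, -0.5, 2.0])
score = lambda af: float(np.exp(-np.sum((af - target) ** 2)))
delta, s = intervene_on_af(score, np.zeros(3))
```

The returned `delta` can be read as a suggestion for how candidate algorithms might be adjusted, in the spirit of points 1 and 2 above.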
Regarding the generation of new algorithms, we already have some well-developed ideas. Specifically, by combining the DAG learned by DAG-AS and the results of algorithm feature interventions, we can generate suggestions for improving candidate algorithms. These suggestions, along with the code of the candidate algorithms, can then be submitted to a LLM. Leveraging the code-related ability of the LLM, we can achieve the automated optimization and design of algorithms. We will incorporate the discussion into the final version to further enhance the depth of our research.
> Q1
The goal of classic SHAP analysis is to explain the prediction results of a model by calculating the contribution of each feature to the prediction. It focuses on quantifying the impact of individual features on the output within the model's framework. In contrast, our paper aims to achieve multi-level explainability in algorithm selection. It not only considers the influence of features on the selection result but also delves into the causal relations between problem features and algorithm features, as well as the overall causal structure. This helps users understand not only which features are important but also how they interact and influence the algorithm selection process in a causal sense.
> Q2
We appreciate your astute observation. We indeed introduced additional constraints to simplify the search process for determining the appropriate perturbation range and step size in instance-level explanations. As detailed in our last response to Reviewer BWG1, we have proposed some methods to assist in solving the optimization problem. Due to space limitations, we cannot elaborate on these here. Moreover, it is unnecessary to adjust Eq. (15) since this equation represents the original form of the optimization problem.
> Q4
The construction of the directed cyclic graph is still based on the two assumptions. As for performance degradation on SAT03-16-INDU, please read the 5th response to Reviewer nswt.
---
Rebuttal Comment 1.1:
Comment: I have read the comments by the authors and have no further questions. | Summary: The paper introduces a new approach to algorithm selection using directed acyclic graphs (DAGs) and causal relations. The approach focuses on modeling the causal relations between problem features and algorithm features. The authors argue that this method not only improves the accuracy of algorithm selection but also enhances the model's robustness against distribution shifts and provides multi-level explainability. This approach allows the model to learn the strong feature representations for the most suitable algorithm given a specific problem. The authors of the given paper also propose a counterfactual explanation method to understand how minimal changes in problem features can lead to different algorithm selection outcomes, improving the interpretability of algorithm selection. The authors demonstrate the effectiveness of their approach through experiments on the ASlib benchmark, showing superior performance compared to many traditional algorithm selection methods.
Claims And Evidence: I strongly appreciate the proposed approach of applying causal relationship learning with algorithm selection. I agree with the authors that this is in fact a very novel approach for the domain of algorithm selection and makes major contributions to improving the performance of algorithm selection systems.
The empirical evidence of the paper is strong since the authors compare against established algorithm selection approaches on an established algorithm selection benchmark library (ASlib). However, I’m concerned about missing state-of-the-art approaches for algorithm selection. For example, ASAP won the Open Algorithm Selection Challenge. Other strong and established systems such as AutoFolio are also not considered. ISAC, SATzilla11 and SNNAP are very old approaches and not considered SOTA. (It is less clear for SUNNY since it would depend on the version of SUNNY.)
I very much like the idea of counterfactual interpretations of algorithm selection. However, it is not fully clear to me whether this depends on the DAG approach proposed by the authors or whether this is completely orthogonal to the rest of the paper. To my understanding, counterfactual interpretation can also be derived on correlation-based approaches.
I fully agree with the claim of robustness of the DAG approach and the conducted experiments.
Methods And Evaluation Criteria: The proposed method is very compelling and nicely motivated. The formal derivation is good and with sufficient background knowledge also fairly understandable. Some minor comments: It could be clearer which modeling assumptions are specific to algorithm selection and which ones are quite established modeling approaches for causal learning. Furthermore, I dislike that I had to jump to the appendix to read up on parts of the notation – without it, someone not being fully familiar with causal learning notation is lost in the main paper.
The evaluation on ASlib with PAR10 follows the common best practices of the community and thus it is very appropriate. However, it is a bit unfortunate that only some of the benchmark scenarios have algorithm features and some others don’t have them, so it is not quite clear whether the benchmarks could be (unintentionally) biased.
Furthermore, I was missing any kind of baseline that makes use of algorithm features, as the ones cited by Tornede et al. As far as I know, none of the baselines use algorithm features, but only instance features.
Theoretical Claims: Theorem 2.3 and all the other theoretical derivations made sense to me intuitively, as I am someone with strong expertise in algorithm selection and less in causal learning. I have not read Proof B (in the Appendix) in detail.
Experimental Designs Or Analyses: The experimental design was overall convincing (with the few exceptions I mentioned above already). I also strongly appreciated the ablation and robustness study showing the strengths and advantages of the approaches and its components.
I was missing a bit more details on when the approach will perform well and why it failed to do so on two ASlib scenarios. The authors briefly mention aspects such as the amount of training data; however, for example, SAT03-16-INDU has a lot more training data than SAT11-RAND; nevertheless, DAG-AS performed very well on SAT11-RAND but not on SAT03-16-INDU. Later on, the authors argued with the sparsity of the DAG; however, also the additional experiments in the appendix (e.g., Figure 9) have not really helped me to get a good grasp of when DAG-AS fails and when not.
As discussed before, I’m least convinced by the interpretation advantage of DAG-AS. I agree that having a DAG is nice in practice, but DAGs as shown in Figure 8 are not helpful in understanding the underlying modeling problem and causal relationship. Although I am quite familiar with the ASlib scenarios, I was not able to get any good insight from Figure 8. Also as said before, I like the idea of applying counter-factual interpretations and, indeed, a DAG makes this a lot easier, but in principle, methods such as LIME could also be used similarly.
Supplementary Material: I have checked all the figures in the appendix but have not read all the explanations. However, with such a compelling appendix, I strongly wonder why this is not a journal submission. (Something seems to be broken in our system if this paper is submitted to a conference.)
Relation To Broader Scientific Literature: The related work is overall well covered. It seems to be a bit biased towards the traditional approaches in algorithm selection and discusses less modern approaches based on deep learning without explicit feature representations.
Essential References Not Discussed: None
Other Strengths And Weaknesses: I see a lot of potential in this paper because it provides a completely new direction for the algorithm selection community. It is the first very substantial progress in algorithm selection I have seen in recent years.
I was hoping to see more discussion on the intervention of algorithm features in this paper since causal reasoning would make this much more interesting. I wonder whether the authors were thinking about modifying instances or generating completely new instances – both are research directions already proposed (independent of causal reasoning) in the algorithm selection community.
Furthermore, informative algorithm features are not always easily available. Also, the computation graph features proposed by Pulatov et al. are not as trivial and it is unclear whether they are informative enough. I would love to have at least a short paragraph discussing the very important assumption on the availability of informative algorithm features.
Other Comments Or Suggestions: None
Questions For Authors: I have a bit of concern regarding your weighted loss function which seems to be fairly complex. I fully understand and agree with the individual components. But how have you managed to find the weights for your loss functions and on which benchmarks have you done this?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: > Experiments: Missing SOTA approaches. None of the baselines use algorithm features.
Thanks for your valuable suggestions. We acknowledge that our experiments lacked some SOTA comparison methods. In response, we have now incorporated ASAP and AutoFolio into our experiments.
Regarding the absence of methods based on algorithm features in our original study, there are two main reasons: 1) these methods mainly differ in feature types rather than model design, and 2) in some scenarios of ASlib, it was difficult to obtain the source code, which made it challenging to extract algorithm features, such as features derived from code or hyperparameters.
According to your suggestion, we selected the method proposed by Pulatov et al and AS-LLM as additional comparison baselines. Due to space constraints, we have plotted the experimental results in a figure and presented them at the anonymous link https://imgur.com/a/QmDoUeA. It should be noted that AS-LLM can only be evaluated in scenarios where the source code is available. From the experimental results, it can be clearly seen that DAG-AS still maintains the best performance in most scenarios.
> Whether counterfactual interpretations depends on the DAG-AS
Counterfactual interpretations are indeed closely related to DAG-AS. As described in Page 7, the three steps of counterfactual calculation all rely on the causal model learned by DAG-AS:
1. Abduction step needs to use the structural equations in DAG-AS to infer the exogenous variables;
2. Intervention step uses do-operator to modify the causal graph learned by DAG-AS.
3. Prediction step calculates the results based on the modified causal graph.
In summary, while counterfactual interpretations can be conceptually applied in other settings, in our work, the DAG-AS framework provides the necessary foundation for a systematic and accurate implementation of counterfactual analysis in algorithm selection.
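The three steps above can be illustrated on a one-dimensional additive-noise structural equation. This is a toy sketch only: the structural equation `f` and the additive-noise form are assumptions for illustration, not the equations learned by DAG-AS.

```python
# Toy SCM:  af = f(pf) + u,  where u is the exogenous variable.

def abduction(f, pf, af_observed):
    """Step 1: infer the exogenous noise consistent with the observation."""
    return af_observed - f(pf)

def counterfactual_af(f, pf_new, u):
    """Steps 2+3: intervene do(PF = pf_new), then predict AF while
    keeping the exogenous noise recovered during abduction."""
    return f(pf_new) + u

f = lambda pf: 2.0 * pf                       # hypothetical structural equation
u = abduction(f, pf=1.0, af_observed=2.5)     # u = 2.5 - 2.0 = 0.5
af_cf = counterfactual_af(f, pf_new=3.0, u=u) # 6.0 + 0.5 = 6.5
```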
> Which modeling assumptions are specific to algorithm selection and which ones are quite established modeling approaches for causal learning.
The two assumptions in the paper are both proposed based on the characteristics of the algorithm selection task, mainly to simplify the complexity of the causal graph. There are some assumptions in causal learning itself, such as causal sufficiency, which are not included in this paper.
> Someone not being fully familiar with causal learning notation is lost in the main paper.
We sincerely apologize for any confusion caused to readers. We will make adjustments and add a notation table to the final version.
> When DAG-AS fails and when not
We think that DAG-AS may not perform well under 3 cases:
1. When the features themselves lack informativeness or when the correlation between problem features and algorithm features is extremely weak.
2. When the data violates the causal sufficiency assumption. This means that there are confounding factors that are not included in the feature set.
3. When the causal relationships within the dataset are overly complex.
However, it should be noted that cases 2 and 3 can potentially be addressed by improving DAG-AS. In the causal learning field, there are numerous specialized models designed to handle complex causal relations, such as those dealing with multivariable or nonlinear causality. Additionally, there are studies focused on identifying causality in scenarios where the causal sufficiency assumption is violated. To enhance the performance of DAG-AS, more advanced causal learning models can be adopted to replace Eqs. (8) and (9).
> I’m least convinced by the interpretation advantage of DAG-AS.
We sincerely appreciate your criticism. We acknowledge that the interpretability aspect in the context of ASlib may not be as immediately intuitive. The reason is that the algorithm features used in ASlib are AST features, which are highly abstract in terms of their physical meaning. However, the interpretability can be extremely valuable in scenarios where the physical meaning of features is explicit. For example, when considering the architecture features or hyperparameters of deep-learning models as algorithm features. In such cases, if the causal graph reveals which problem features influence the number of neurons in specific layers, this interpretability can significantly assist users in understanding the model design. It also provides great utility for experts when debugging the DAG-AS model.
**Due to space limitations, the following issues are addressed in the response of other Reviewers:**
> Paragraph discussing the assumption on the availability of informative algorithm features.
Please refer to the 1st response of Reviewer oy94 (Weakness1)
> More discussion on the intervention of algorithm features
Please refer to the 2nd response of Reviewer oy94 (Weakness2)
> Weights for loss function
Please refer to the 2nd response of Reviewer BWG1 (Weakness2)
---
Rebuttal Comment 1.1:
Comment: Many thanks for the reply, explanations and new results. Very much appreciated. I increased my score accordingly since I believe the few weaknesses do not matter in view of this important and very novel contribution to algorithm selection.
However, I would like to slightly disagree with your wording in the rebuttal:
> physical meaning of features is explicit
For algorithms, there is nothing like a physical meaning since they are always abstract concepts ;-)
If you would like to add this line of arguments to your paper, you will find better wording for that.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your strong support for our paper and your insightful review comments. Your feedback has been extremely valuable to us.
We acknowledge that stating there is a "physical meaning" for algorithm features was inaccurate. What we actually intended to convey is that when algorithm features possess an **intuitive or tangible meaning that is easily comprehensible to humans**, the explanations provided by the causal graph, as well as the interventions on these algorithm features, can significantly aid humans in understanding the algorithm selection mechanism and improving candidate algorithms.
We will use more precise wording in the final version of our paper to avoid such misunderstandings. Once again, thank you for your attention to detail and for helping us improve the quality of our work. | Summary: The paper argues that current approaches to automatic algorithm selection are mainly based on the correlation between algorithm performance and problem meta-features, which are susceptible to data bias and distributional variations, and lack robustness. The thesis proposes to use DAG to represent the causal relationship between problem features and algorithm features, to model the algorithm selection process, and to provide model-level and instance-level interpretability through counterfactual computation. In ASlib benchmark tests, DAG-AS outperforms existing methods in terms of robustness and interpretability.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes, the approach proposed by the authors models the relationship between problem features and algorithm features from a causal perspective and provides interpretability.
Theoretical Claims: The author's proof seems reasonable to me.
Experimental Designs Or Analyses: The authors' experimental setup seems reasonable to me, but in the experimental design for distributional shifts in Appendix E.4, I am puzzled by the authors' simulation of a 'distributional shift of the optimal algorithm', i.e., of the distribution of a particular optimal algorithm. For a well-designed algorithm, its characteristics should be fixed, and for a given problem characteristic, the optimal algorithm should also be determined. Intuitively, the optimal algorithm should therefore be fixed as well.
Supplementary Material: I have focused on appendices A, B, D, E.4, E.5 and E.6.
Relation To Broader Scientific Literature: Previous studies modelled the joint distribution of PF and AF. The authors argue that such an approach is sensitive to distributional changes and lacks robustness, and they propose modelling P(AF|PF, S = 1) to approach automatic algorithm selection from a causal perspective.
Essential References Not Discussed: The reference is sufficient.
Other Strengths And Weaknesses: Strengths:
1. Existing algorithm selection methods mainly rely on empirical correlation; grounded in causal modelling, this paper is the first to use a DAG to describe the relationship between algorithm features and problem features, thus enhancing the robustness and interpretability of the model.
2. The paper provides an actionable explanation for algorithmic recommendations by analysing how problem characteristics affect algorithmic choices through minimal intervention in an instance-level explanation.
3. The paper uses multiple causal diagrams (DAG structure), heatmap, and performance comparison table to illustrate the advantages of DAG-AS, making the experimental results intuitive and easy to understand.
Weakness:
1. The question of the reasonableness of the distributional bias experiment (Appendix E.4(2)) setup, i.e., the reasonableness of modifying the optimal algorithmic distribution?
2. The authors do not have corresponding hyperparameters to balance the three aspects of constraints on causality, constraints on feature representation, and constraints on the DAG structure, and do not see this aspect discussed in the experiments. I think this is necessary.
Other Comments Or Suggestions: 1. It is recommended that the authors provide further details on the practical applications of this direction of automatic algorithmic selection as well as on the issue of efficiency to help the reader better understand the importance of this direction.
2. Even though the questions are different, I still recommend that the authors read [1], whose modelling of causality and discussion of it is very worthwhile for the authors to learn from.
[1] CausPref: Causal Preference Learning for Out-of-Distribution Recommendation
Questions For Authors: 1. Question about Eq. 2:
P(S = 1|PF, AF) seems to be the existing modelling approach. Through Bayes' theorem, the authors proved a positive relationship between their modelling approach, P(AF|PF, S = 1), and P(S = 1|PF, AF). In my understanding, the advantage of causal modelling over correlational modelling lies in its identification of confounding factors and consideration of the variables' causal relationships, so why did the authors make this clarification? The advantages of modelling causality over modelling correlation seem obvious to me.
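For reference, the Bayes relationship this question refers to can be checked numerically on a toy discrete example (the probability values below are made up for illustration; only the proportionality $P(AF \mid PF, S=1) \propto P(S=1 \mid PF, AF)\,P(AF \mid PF)$ is the point):

```python
import numpy as np

# Three candidate algorithm-feature values, conditioning on a fixed PF throughout.
p_af_given_pf = np.array([0.2, 0.5, 0.3])   # P(AF | PF)          (made up)
p_s1_given_af = np.array([0.9, 0.1, 0.4])   # P(S=1 | PF, AF)     (made up)

joint = p_af_given_pf * p_s1_given_af       # P(AF, S=1 | PF)
p_af_given_s1 = joint / joint.sum()         # P(AF | PF, S=1), approx [0.514, 0.143, 0.343]

# The posterior ranking follows the product, not either factor alone.
assert np.argmax(p_af_given_s1) == np.argmax(joint)
```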
2. Is there an efficiency problem with exploring interpretability through counterfactual methods? It seems that the do-operation needs to iterate through all PFs. Do the authors have anything relevant to say about this?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > Weakness 1
Thanks for your valuable comments. You are right that "for a defined problem characteristic, the optimal algorithm should also be defined", which implies that $P(AF|PF)$ remains constant. In Appendix E.4, our intention was not to change $P(AF|PF)$, but rather to manipulate the marginal distributions $P(AF)$ and $P(PF)$. The “Shift on Optimal Algorithm Distribution” was achieved through a deliberate selection process of the training and test data. Specifically, any candidate algorithm can be the optimal one for a certain proportion of problems. We manipulate this proportion to induce a distribution shift of the optimal algorithm. For example, in the training data, algorithms A and B are the best-performing ones for 20% and 80% of the problems respectively, while in the test data, these proportions are reversed, with A being optimal for 80% and B for 20% of the problems. We construct the training and test sets by selecting problem instances from the original dataset according to these proportions. Since $P(AF|PF)$ is fixed in algorithm selection, any adjustment to either $P(AF)$ or $P(PF)$ will lead to a corresponding change in the other. Therefore, during the implementation of the “Shift on Optimal Algorithm Distribution”, the distribution of problem features also changed accordingly.
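The resampling scheme described in this response could be sketched as follows. This is an assumed illustration of the idea (two algorithms, fixed split size of 100 instances per set, sampling with replacement), not the authors' exact procedure.

```python
import numpy as np

def shift_split(instances, optimal, train_frac, test_frac, n=100, rng=None):
    """Build train/test sets where algorithm A (label 0) is optimal for
    train_frac / test_frac of instances, inducing a distribution shift."""
    rng = rng if rng is not None else np.random.default_rng(0)
    idx_a = np.flatnonzero(optimal == 0)   # instances where A is optimal
    idx_b = np.flatnonzero(optimal == 1)   # instances where B is optimal
    train = np.concatenate([rng.choice(idx_a, int(n * train_frac)),
                            rng.choice(idx_b, n - int(n * train_frac))])
    test = np.concatenate([rng.choice(idx_a, int(n * test_frac)),
                           rng.choice(idx_b, n - int(n * test_frac))])
    return instances[train], instances[test]

optimal = np.array([0] * 500 + [1] * 500)  # toy labels: A optimal on half
instances = np.arange(1000)
tr, te = shift_split(instances, optimal, train_frac=0.2, test_frac=0.8)
```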
> Weakness 2
Thanks for your reminder. We have used hyperparameters to balance the impacts of various losses. As stated on line 278, “The overall loss function is the weighted average of the causal learning loss and algorithm selection loss.” However, we apologize for the misunderstanding caused by not presenting the final form of the loss function. In the final version, we will provide the complete loss function with hyperparameters: $L = L_{\text{reconstruction}} + \alpha L_{\text{sparsity}} + \beta L_{\text{acyclicity}}+ \gamma L_{\text{selection}}$. Current experimental results were obtained under the same parameter settings, where $\beta=\gamma=1$ and $\alpha=0.0001$. Based on your suggestion, we conducted a hyperparameter analysis on SAT11-INDU. The results can be found at the anonymous link: https://imgur.com/a/b8C9l5G. In the left figure, we analyzed the balance between the reconstruction loss and the two causal learning constraints (analyzing $\alpha$ and $\beta$), while keeping $\gamma=1$. In the right figure, we analyzed the balance between the causal learning loss and the algorithm selection loss (analyzing $\gamma$), while keeping $\beta=1,\alpha=0.0001$. It can be seen that DAG-AS can maintain relatively good performance within a wide range of parameters. In the final version, we will supplement a more detailed hyperparameter analysis.
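For concreteness, the weighted combination described above can be sketched as follows. This is an illustrative sketch, not the authors' code: only the weighting scheme ($\alpha=0.0001$, $\beta=\gamma=1$) comes from the rebuttal, while the individual loss values are made-up placeholders.

```python
# Illustrative sketch of the weighted loss from the rebuttal:
# L = L_recon + alpha*L_sparsity + beta*L_acyclicity + gamma*L_selection.
def total_loss(l_recon, l_sparsity, l_acyclicity, l_selection,
               alpha=0.0001, beta=1.0, gamma=1.0):
    return l_recon + alpha * l_sparsity + beta * l_acyclicity + gamma * l_selection

# Placeholder loss values, chosen only to show the effect of the weights:
loss = total_loss(l_recon=1.0, l_sparsity=10.0, l_acyclicity=0.5, l_selection=0.3)
# With alpha=0.0001 the sparsity term contributes only 0.0001 * 10 = 0.001.
```

Note how the small $\alpha$ keeps the sparsity constraint from dominating the reconstruction and selection objectives.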
> Other Comments & Suggestions
We sincerely appreciate your suggestions. We'll incorporate more details into the final version to highlight the significance of this area, including its practical applications in machine learning, scientific computing, and engineering optimization, as well as its superiority in enabling efficient decision-making, especially in real-time applications. In terms of CausPref, it is a typical study in which causal learning enhances robustness. The strategies of CausPref in causal preference modeling and negative sampling are worth learning. We will cite this paper in the final version and discuss the inspiration for our study.
> Q1
We apologize for the ambiguity in this part. Our intention was not to illustrate “The advantages of modelling causality over correlation” through Eq(2). Rather, we aimed to express the connection between the two modelling approaches, i.e., $P(S=1|PF,AF)$ and $P(AF|PF,S=1)$ are equivalent in terms of modelling in the algorithm selection task. However, $P(AF|PF,S=1)$ directly models the conditional distribution, which enables it to be more resilient against the marginal distribution shifts. After these discussions, we propose using a causal DAG to model $P(AF|PF,S=1)$.
> Q2
First, we would like to clarify that the process does not necessarily require iterating through all PFs. This mainly depends on the DAG learned by DAG-AS. We only need to intervene on all the parent nodes of the algorithm features. Therefore, the DAG has already reduced the dimensionality for counterfactual calculations.
However, even with this reduction, a large number of problem features may still be involved in the counterfactual calculations, so improving the efficiency of this process is essential. One potential approach is to model the interpretability calculations as a reinforcement learning system, where the action space would include the features of intervention and their magnitudes, and the reward space would be defined by the changes in the algorithm selection decision. We could take several interventions as samples to train separate Action and Reward networks. This approach may require an independent research paper for in-depth discussion. Therefore, we plan to present this future work in the final version.
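As a toy illustration of the dimensionality reduction the authors describe (all feature names here are invented, not from the paper): given a learned DAG, counterfactual do-interventions are only needed on the parents of the algorithm-feature nodes, not on every problem feature.

```python
# Hypothetical DAG: each algorithm feature (AF) maps to its parent
# problem features (PF). Only these parents require do-interventions.
dag_parents = {
    "AF1": ["PF2", "PF5"],
    "AF2": ["PF5"],
}
all_problem_features = [f"PF{i}" for i in range(1, 11)]  # 10 PFs in total

# Union of parents across all algorithm features:
intervention_set = sorted({p for parents in dag_parents.values() for p in parents})
# Only 2 of the 10 problem features need counterfactual interventions.
```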
---
Rebuttal Comment 1.1:
Comment: I have read the comments by the authors and have no further questions.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your support and your insightful review comments. These comments will be carefully addressed in the revised version. | Summary: The paper "Towards Robustness and Explainability of Automatic Algorithm Selection" introduces a novel approach to algorithm selection using a directed acyclic graph (DAG) to model the causal relationships between problem features and algorithm features. The proposed method, DAG-based Algorithm Selection (DAG-AS), aims to enhance robustness against distribution shifts and improve explainability at both the model and instance levels. The paper demonstrates the effectiveness of DAG-AS through experiments on the ASlib benchmark, highlighting its advantages in terms of accuracy, robustness, and explainability.
Claims And Evidence: The paper claims that DAG-AS improves robustness and explainability in algorithm selection tasks. The evidence provided includes experimental results showing that DAG-AS outperforms traditional methods in terms of PAR10 scores across multiple datasets. The paper also presents causal graphs and counterfactual explanations to support the claim of enhanced explainability.
Methods And Evaluation Criteria: The authors employ a causal learning framework using DAGs to model the relationships between problem and algorithm features. The evaluation criteria include the PAR10 score, which measures the performance of algorithm selection methods based on solution time and timeouts. The authors compare DAG-AS with several established methods and baselines, including ISAC, MCC, SATzilla11, SNNAP, SUNNY, VBS, and SBS.
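For readers unfamiliar with the metric, PAR10 (penalized average runtime) is conventionally computed as below. This sketch reflects the standard definition used in the algorithm selection literature, not code from the paper.

```python
def par10(runtimes, solved, cutoff):
    """Average runtime where each unsolved (timed-out) instance is
    penalized as 10 times the cutoff time."""
    scores = [t if ok else 10 * cutoff for t, ok in zip(runtimes, solved)]
    return sum(scores) / len(scores)

# Two solved instances plus one timeout under a 100-second cutoff:
score = par10([30.0, 70.0, 100.0], [True, True, False], cutoff=100)
# score = (30 + 70 + 1000) / 3
```

Lower PAR10 is better; the 10x penalty makes the metric very sensitive to timeouts, which is why it is the standard criterion on ASlib scenarios.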
Theoretical Claims: The paper theoretically claims that modeling the conditional distribution of algorithm features based on problem features using a DAG can improve robustness against distribution shifts and provide multi-level explainability. The authors support this claim with a theorem (Theorem 2.3) that establishes the feasibility of using causal models for algorithm selection tasks.
Experimental Designs Or Analyses: The experimental design includes performance comparisons, ablation studies, and robustness tests against distribution shifts. The authors use ten ASlib datasets with algorithm features and conduct experiments to evaluate the performance, robustness, and explainability of DAG-AS. The experiments are repeated multiple times to ensure reliability, and the results are presented in terms of PAR10 scores and causal graph analyses.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: +
The paper introduces a novel approach to algorithm selection that leverages causal learning, which is a relatively unexplored area in this field.
The use of DAGs provides a clear and interpretable framework for understanding the relationships between problem and algorithm features.
The experimental results demonstrate the effectiveness of DAG-AS in improving robustness and explainability.
-
The paper's performance on certain datasets, such as GLUHACK-18 and SAT03-16-INDU, is not as strong as on others, indicating potential limitations in capturing causal relationships with limited training data.
The complexity of the causal learning framework may pose challenges for practical implementation and scalability.
Other Comments Or Suggestions: The authors could provide more details on the computational complexity and scalability of the DAG-AS framework, especially for large-scale datasets.
It would be beneficial to explore the potential of DAG-AS in other domains beyond the ASlib benchmark to assess its generalizability.
Questions For Authors: How does the computational complexity of DAG-AS compare to traditional algorithm selection methods, and what are the implications for scalability?
Can DAG-AS be applied to other domains or types of problems beyond those included in the ASlib benchmark?
How does the choice of problem and algorithm features impact the performance and explainability of DAG-AS?
Are there any specific types of distribution shifts or problem characteristics where DAG-AS may not perform well?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > The paper's performance on GLUHACK-18 and SAT03-16-INDU, is not as strong as on others, indicating potential limitations ...
We acknowledge that DAG-AS did not perform well on GLUHACK-18 and SAT03-16-INDU. However, it is unjustified to undermine the significance of DAG-AS merely based on its performance on these 2 datasets. Specifically,
1. It is challenging for any single method to achieve optimal performance across all problems. In this paper, we conducted a comprehensive evaluation on 10 datasets. The results demonstrate that DAG-AS outperformed all the comparison methods on 8 of these datasets. Notably, on half of them, DAG-AS significantly outperformed other methods, which illustrates the distinct advantages of DAG-AS.
2. Regarding the DAG-AS's performance on GLUHACK-18 and SAT03-16-INDU, it may be due to the data violating the causal sufficiency assumption or the overly complex causal relations within the data. Nevertheless, these issues are not insurmountable in the causal learning field. There are numerous models specifically designed for complex causality in the causal learning domain, such as models for handling multivariate/non-linear causality, and models for scenarios where the causal sufficiency assumption is violated. The causal learning module of DAG-AS is a relatively simple MLP, mainly to highlight the impact of introducing causality into algorithm selection (AS). If further performance improvement is desired, more advanced causal learning models can be employed to replace Eq(8,9).
> Computational complexity of DAG-AS
There is no significant difference in computational complexity between DAG-AS and existing methods. Compared to traditional methods, DAG-AS needs to construct a causal DAG and reconstruct algorithm features, while traditional methods often directly predict the optimal algorithm. However, due to the sparsity of the DAG, the reconstruction model for each algorithm feature is actually small-scale. We have provided the running time comparison at the anonymous link: https://imgur.com/a/1w6eLIQ. It can be seen that the running time of DAG-AS is comparable to existing methods. Despite this, DAG-AS achieves remarkable performance gains, indicating its practicability.
> Can DAG-AS be applied to other domains or types of problems beyond those included in the ASlib benchmark?
DAG-AS is designed to be applicable to a wide range of AS scenarios, far beyond the scenarios in the ASlib benchmark. The key requirement for applying DAG-AS is the availability of the general configuration for the AS task, including problem features, algorithm features, and performance data. As long as these elements are provided, DAG-AS can be effectively utilized. Actually, the ASlib itself is also composed of problems from diverse domains.
> How does the choice of problem and algorithm features impact the performance and explainability of DAG-AS?
*Performance Impact*: Different features carry distinct information about the nature of the problem and algorithm. In datasets with a rich set of features that are informative and highly relevant to the algorithm's performance, DAG-AS can better capture the causal relationships. This is similar for other AS methods. Traditional algorithms aim for these features to be correlated with the algorithm's performance, while DAG-AS expects there to be causal relationships behind such correlations.
*Explainability Impact*: When it comes to the impact on explainability, the nature of features matters depending on the user's goals. If users want to understand the decision-making logic of DAG-AS and interpret the causal relationships between features in the DAG, these features should have physical meaning, rather than being representations extracted by deep networks. However, if the sole expectation is for DAG-AS to make AS decisions, features without clear physical meaning can also be perfectly suitable. For example, since ASlib did not provide algorithm features for BNSL-2016, we used an LLM to extract representations from the code of candidate algorithms. Despite the lack of obvious physical meaning in these features, DAG-AS still achieved the best performance.
> Are there any specific types of distribution shifts or problem characteristics where DAG-AS may not perform well?
The causal learning module in DAG-AS is essentially built on modeling conditional probabilities. Hence, it can mitigate the impact of covariate shift and prior probability shift. It has been verified in Fig.6 that DAG-AS can handle distribution shifts on problem features and optimal algorithm distributions well. However, DAG-AS may face challenges in dealing with shifts in $P(AF|PF)$. But in AS, $P(AF|PF)$ implies what characteristics an algorithm needs to solve a problem with specific feature values, which does not shift in a given AS task. As for problem characteristics, DAG-AS has no specific requirements regarding the problem domain, as mentioned in previous responses. | null | null | null | null | null | null |
Adaptive Multi-prompt Contrastive Network for Few-shot Out-of-distribution Detection | Accept (spotlight poster) | Summary: This paper proposes the Adaptive Multi-prompt Contrastive Network for few-shot out-of-distribution detection, aiming to improve OOD detection performance when only limited labeled in-distribution samples are available. The method introduces adaptive prompts to learn class distributions and enhance the separation between ID and OOD samples. It includes three main modules: Adaptive Prompt Generation, Prompt-based Multi-diversity Distribution Learning, and Prompt-guided OOD Detection. These modules address challenges such as background bias and diverse class distributions.
Claims And Evidence: The claims made in the submission are generally well-supported by clear and convincing evidence. The authors propose the Adaptive Multi-prompt Contrastive Network to address the challenges of few-shot out-of-distribution detection and claim that it outperforms existing methods. This claim is substantiated through extensive experiments on multiple benchmark datasets, including ImageNet-1k as the in-distribution dataset and various OOD datasets such as iNaturalist, Places, TEXTURE, and SUN. The results demonstrate significant improvements in key metrics like FPR95 and AUROC across different few-shot settings, with notable reductions in false positive rates and enhanced detection accuracy.
Methods And Evaluation Criteria: The authors introduce the Adaptive Multi-prompt Contrastive Network, which incorporates adaptive prompts and a multi-prompt contrastive learning framework to address the challenges of limited labeled in-distribution samples and the absence of OOD samples during training. This approach is innovative and tailored to the specific difficulties of few-shot OOD detection, such as the need to generalize from limited data and handle diverse class distributions. The evaluation criteria used in the paper, including FPR95, AUROC, and classification accuracy, are standard and appropriate for assessing OOD detection performance.
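As a reference for one of the metrics mentioned above: FPR95 is the false positive rate on OOD samples at the score threshold that retains 95% of ID samples. A minimal sketch, assuming higher scores mean "more in-distribution" (this is the standard definition, not the authors' code):

```python
def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OOD samples scored at or above the threshold
    that keeps 95% of ID samples classified as in-distribution."""
    s = sorted(id_scores)
    thresh = s[int(0.05 * len(s))]  # 95% of ID scores lie at or above this
    return sum(o >= thresh for o in ood_scores) / len(ood_scores)

# Toy example: ID scores 0..99; threshold lands at 5, so OOD scores 5 and 10
# are false positives while 1 and 2 are correctly rejected.
fpr95 = fpr_at_95_tpr(range(100), [1, 2, 5, 10])
```

Lower FPR95 is better; unlike AUROC, it probes a single operating point chosen to keep ID recall high.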
Theoretical Claims: No issues were found in the proof. The paper includes a theoretical claim regarding the projection of all feature vectors onto the unit hyper-sphere for cross-modal matching, which is essential for the multi-modal contrastive loss mechanism. This claim is supported by a detailed proof in Appendix A. The proof demonstrates that the L2 normalization applied to both image and text feature vectors ensures that all features lie on the surface of the unit hyper-sphere, thereby enforcing the use of cosine similarity as a dot product in the contrastive loss. The steps of the proof are logically structured and correctly derived, confirming that the features are constrained to the unit hyper-sphere as claimed.
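The normalization property the proof relies on can be demonstrated with a tiny self-contained sketch (toy 2-D vectors, not the authors' code): after L2 normalization, every feature lies on the unit hyper-sphere, so the dot product of an image feature and a text feature equals their cosine similarity.

```python
import math

def l2_normalize(v):
    """Project a feature vector onto the unit hyper-sphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

img = l2_normalize([3.0, 4.0])   # toy image feature
txt = l2_normalize([4.0, 3.0])   # toy text feature

# Both now have unit norm, so their dot product is the cosine similarity
# used in the multi-modal contrastive loss: 24/25 = 0.96.
dot = sum(a * b for a, b in zip(img, txt))
```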
Experimental Designs Or Analyses: The authors conducted extensive experiments across multiple benchmark datasets, including ImageNet-1k as the in-distribution dataset and various OOD datasets such as iNaturalist, Places, TEXTURE, and SUN. These datasets are widely recognized in the field and provide a diverse range of scenarios to test the robustness of the proposed method.
The experimental setup includes comparisons with several state-of-the-art methods, covering fully supervised, zero-shot, one-shot, and eight-shot settings. This comprehensive comparison allows for a clear evaluation of AMCN's performance relative to existing approaches under different conditions. The authors also provided detailed ablation studies to analyze the contributions of individual components of their framework, such as different prompt types (LIP, LFOP, LAOP) and the impact of adaptive thresholding.
Supplementary Material: Yes, I review the code of Adaptive Multi-prompt Contrastive Network.
Relation To Broader Scientific Literature: The key contributions of this paper are well-situated within the broader scientific literature on out-of-distribution detection and few-shot learning, building upon and advancing several important concepts in these fields. The proposed AMCN leverages adaptive prompts and contrastive learning, which are both active areas of research in machine learning. The idea of using prompts to guide model learning is inspired by recent advancements in natural language processing, particularly in the context of prompting large language models for downstream tasks. This paper extends the concept of prompt learning to the domain of OOD detection, specifically targeting the challenging few-shot scenario where only limited labeled in-distribution samples are available. This approach is novel and addresses a significant gap in the literature, as most existing OOD detection methods rely on large amounts of labeled data and do not account for the diversity and scarcity of samples in few-shot settings.
Essential References Not Discussed: There are no essential related works missing from the citations.
Other Strengths And Weaknesses: Strengths:
1. The paper introduces a novel approach combining adaptive prompts and contrastive learning, effectively addressing the challenges of few-shot OOD detection with limited labeled data.
2. Extensive experiments and ablation studies demonstrate the method's robustness and effectiveness across multiple datasets and few-shot settings.
Weaknesses:
1. Figure 1 is not referenced or discussed in the main text. The caption of Figure 7 contains an error: "right" and "left" are mistakenly reversed.
2. The paper heavily relies on CLIP for feature extraction and prompt engineering. While this is a common choice, it raises the question of whether using alternative models for feature extraction could yield different results.
3. The paper does not discuss the computational complexity or efficiency of the proposed method in comparison to existing methods.
Other Comments Or Suggestions: See Weakness.
Questions For Authors: See Weakness.
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Other expertise']
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer oFtr for the valuable comments and provide the following detailed responses to all weaknesses.
---
### **W1: Figure 1 is not referenced or discussed in the main text. The caption of Figure 7 contains an error: "right" and "left" are mistakenly reversed.**
Thanks a lot for your valuable suggestion. Figure 1 should be referenced in the Introduction: Figure 1(a) should be referenced in the first paragraph, Figure 1(b) in the second paragraph (Line 81), and Figure 1(c) in the third paragraph (Line 96). We will revise the caption of Figure 7 in our revised version.
### **W2: The paper heavily relies on CLIP for feature extraction and prompt engineering. While this is a common choice, it raises the question of whether using alternative models for feature extraction could yield different results.**
Thanks a lot for your valuable suggestion. 1) Since all the compared methods utilize CLIP to extract features, we follow them and use CLIP for a fair comparison. 2) Many other feature encoders involve more complex calculations, which might make the designed OOD detector time-consuming. 3) The CLIP network can effectively extract visual and textual features under the few-shot setting. Therefore, we utilize CLIP in our paper.
### **W3: The paper does not discuss the computational complexity or efficiency of the proposed method in comparison to existing methods.**
We appreciate the reviewer’s comments regarding the computational complexity and efficiency of the proposed method. Compared to other methods, the training time and memory consumption are similar:
|Method|Time for one iteration (s)|GPU Memory (MiB)|
|-|-|-|
|CoOp|0.54|20865|
|LoCoOp|0.82|24180|
|SCT|0.87|25267|
|Ours|0.80|23946|
We evaluate the time and memory consumption of our proposed method compared with other baselines in the above table, and the results show that our proposed method is relatively compute-efficient. The evaluation is conducted on a single GTX-4090 GPU with a batch size of 64.
Claims And Evidence: Yes. The claims made in the submission supported by clear and convincing evidence.The experimental results can well show the effectiveness of the method.
Methods And Evaluation Criteria: Yes. The proposed multi-diversity few-shot OOD detection method is practical and novel.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes. I checked the proof in the appendix, i.e., Section A. The proof is correct and reasonable. Authors show the superiority of their method in Table 1 and Figure 4.
Supplementary Material: Yes. I reviewed the supplementary material. Codes are available, and the proposed method seems reproducible. The proposed method and uploaded code well match.
Relation To Broader Scientific Literature: This paper considers various distributions between different classes in the few-shot OOD detection task. It might provide some inspiration for the task of few-shot learning.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The idea is interesting. This paper considers various distributions between different classes in the few-shot OOD detection task. It might provide some inspiration for the task of few-shot learning. Besides, the proposed method is novel and sound.
2. Based on one ID dataset (ImageNet-1k) and four OOD datasets (Texture, Places, SUN and iNaturalist), the authors conduct many experiments to verify the effectiveness of the proposed method. The performance of the proposed method AMCN is state-of-the-art. The corresponding ablation studies show the effectiveness of three modules (adaptive prompt generation, prompt-based multi-diversity distribution learning and prompt-guided OOD detection).
3. The overall structure of the proposed method is clear and the paper is easy to follow. In the experimental section, this performance analysis is reasonable.
4. Authors provide available codes, and the proposed method seems reproducible. Besides, the proposed method and uploaded code well match.
Weaknesses:
1. A little typo in Figure 1 caption. “(c) Brief framework of our method.” should be “(d) Brief framework of our method.”
2. T-SNE visualization in Figure 3 is based on which dataset? The sentence “As shown in Figure 3, different classes have distinct diversity” should be moved to the first paragraph of Section 3.2.
3. In the paragraph above Equation (7), “Based on (7)”, is this Equation (7) or Equation (6)? Is it a typo? I think it should be Equation (6).
Other Comments Or Suggestions: It would be better if authors could provide some failure examples of few-shot OOD detection.
Questions For Authors: Please address the weaknesses. I am willing to raise the score.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 8TNR for the valuable comments and provide the following detailed responses to all weaknesses.
---
### **W1: A little typo in Figure 1 caption. “(c) Brief framework of our method.” should be “(d) Brief framework of our method.”**
Thanks a lot for your valuable suggestion. We will revise Figure 1 caption in our revised version.
### **W2: T-SNE visualization in Figure 3 is based on which dataset?**
We appreciate the reviewer’s comments regarding T-SNE visualization in Figure 3. The dataset of T-SNE visualization in Figure 3 is based on the ImageNet-1k dataset.
### **W3: The sentence “As shown in Figure 3, different classes have distinct diversity” should be moved to the first paragraph of Section 3.2.**
Thank you for your constructive suggestion. We will revise it in our revised version.
### **W4: In the paragraph above Equation (7), “Based on (7)”, is this Equation (7) or Equation (6)? Is it a typo?**
Thanks a lot for your valuable comment. Yes. It should be Equation (6). We will revise it in our revised version.
### **W5: It would be better if authors could provide some failure examples of few-shot OOD detection.**
In few-shot OOD detection, it is paramount to generate ID and OOD prompts. Therefore, the dataset providing the labels (the ID dataset) is key: it should be diverse enough. This is the case for ImageNet-1k, which is used in the benchmarks. However, if we take a more specific, less diverse dataset such as Places as the ID set, the prompt engineering is less effective:
|Shot|FPR95($\downarrow$)|AUROC($\uparrow$)|
|-|-|-|
|1|44.21|88.50|
|8|43.75|89.74|
Here, the Places dataset (with fewer classes) is treated as the ID set and the ImageNet-1k dataset as the OOD set (reversed compared to the benchmarks). We can see that when we switch the Places dataset and the ImageNet-1k dataset, the performance improvement is reduced.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. The authors addressed all of my concerns. After considering the comments from other reviewers and authors' rebuttals, I believe that this paper is an excellent work. So I raised my ratings.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your enthusiastic and uplifting support for our efforts! It’s a true privilege to learn that you see our work as significant and influential. Realizing that our contributions strike a chord with others in the community is deeply inspiring. Once more, we extend our sincere gratitude for your insightful feedback and steadfast encouragement. | Summary: This work introduces a new method for few-shot out-of-distribution (OOD) detection. Unlike previous approaches that largely overlook the diverse characteristics among different classes, the proposed method constructs both in-distribution (ID) and OOD prompts and designs multiple contrastive losses to learn a better separation boundary. Experiments on multiple datasets demonstrate significantly improved performance compared to existing methods.
Claims And Evidence: The claimed contributions are validated by ablation experiments.
Methods And Evaluation Criteria: The proposed method looks sound, and the evaluation metrics are proper.
Theoretical Claims: There is no theoretical proof in this work.
Experimental Designs Or Analyses: The experimental design is reasonable, and enough ablation analyses are provided.
Supplementary Material: I have read the supplementary material. It looks good in general.
Relation To Broader Scientific Literature: Out-of-distribution detection is of great interest to the wide computer vision community, it has much value in real-world scenarios, e.g., autonomous driving.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
The work is well-motivated, and a reasonable solution is presented to address the class diversity issue.
The experimental results are strong and clearly show the effectiveness of the proposed method.
Weaknesses:
Prompt learning is widely explored in the community. The idea of using learnable prompts for both ID and OOD classes is straightforward. However, the design of contrastive losses between different sets of prompts adds a meaningful contribution.
The proposed method involves a large number of hyperparameters. The authors should conduct a sensitivity analysis to understand the impact of these parameters on model performance.
In the methodology section, there is a lack of discussion regarding the rationale behind certain technical design choices. For example, it is unclear how class-wise thresholding addresses the issue of sample diversity—this should be further elaborated.
Other Comments Or Suggestions: The equation following Equation 3 is not numbered. Based on the context in the later sections, it seems to represent $L_1$, not $L_C$.
Questions For Authors: The authors should conduct a sensitivity analysis to understand the impact of these parameters on model performance.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer 1bUn for the valuable comments and provide the following detailed responses to all weaknesses.
---
### **W1: Prompt learning is widely explored in the community. The idea of using learnable prompts for both ID and OOD classes is straightforward. However, the design of contrastive losses between different sets of prompts adds a meaningful contribution.**
We appreciate the reviewer’s comments regarding learnable prompts and contrastive losses. Firstly, we combine $P$ learnable ID prefixes and the label name to generate the learnable ID prompts (LIPs). Also, we generate $S$ label-fixed OOD prompts (LFOPs) by introducing OOD labels from other datasets that are disjoint with the ID label set. Since the introduced OOD labels are often limited, we explore $Z$ label-adaptive OOD prompts (LAOPs) for each ID prompt. Besides, we align the image features and ID prompt features by a prompt-guided contrastive loss for ID classification.
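To make the prompt families concrete, here is a purely hypothetical sketch: the class names, the prefix placeholder, and the template are invented for illustration and do not reproduce the authors' implementation (in AMCN the prefixes and the LAOPs are learned embeddings, not literal strings).

```python
id_classes = ["goldfish", "tabby cat"]   # ID label names (hypothetical)
ood_labels = ["nebula", "lichen"]        # disjoint labels from another dataset (hypothetical)
prefix = "[V1] [V2] [V3]"                # stand-in for P learnable prefix tokens

# Learnable ID prompts (LIPs): learnable prefix + ID label name.
lips = [f"{prefix} a photo of a {c}" for c in id_classes]
# Label-fixed OOD prompts (LFOPs): same template with the external OOD labels.
lfops = [f"{prefix} a photo of a {o}" for o in ood_labels]
```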
### **W2: The authors should conduct a sensitivity analysis to understand the impact of these parameters on model performance.**
Thanks a lot for your valuable suggestion. Due to the page limitation, we have analyzed some significant parameter in Figure 5.
We use hyperparameters $\tau_0, \ldots, \tau_5$ in the tool. We report here the sensitivity analysis of these hyperparameters on the SUN benchmark in the eight-shot setting:
|$\tau_0$|FPR95($\downarrow$)|AUROC($\uparrow$)|
|-|-|-|
|0.1|23.46|95.71|
|**0.2**|**23.17**|**95.89**|
|0.3|23.32|95.82|
|$\tau_1$|FPR95($\downarrow$)|AUROC($\uparrow$)|
|-|-|-|
|0.40|23.32|95.83|
|**0.45**|**23.17**|**95.89**|
|0.50|23.53|95.74|
|$\tau_2$|FPR95($\downarrow$)|AUROC($\uparrow$)|
|-|-|-|
|0.4|23.29|95.72|
|**0.5**|**23.17**|**95.89**|
|0.6|23.35|95.83|
|$\tau_3$|FPR95($\downarrow$)|AUROC($\uparrow$)|
|-|-|-|
|0.50|23.40|95.58|
|**0.55**|**23.17**|**95.89**|
|0.60|23.29|95.74|
|$\lambda$|FPR95($\downarrow$)|AUROC($\uparrow$)|
|-|-|-|
|0.2|23.98|94.35|
|**0.3**|**23.17**|**95.89**|
|0.4|23.67|95.12|
### **W3: In the methodology section, there is a lack of discussion regarding the rationale behind certain technical design choices. For example, it is unclear how class-wise thresholding addresses the issue of sample diversity—this should be further elaborated.**
We also appreciate the reviewer’s comments regarding the class-wise threshold. In the fixed setting, we do not update the threshold during training. Since there are different diversities in different classes, we want to learn different thresholds for different classes. Different diversities correspond to different distributions. In Section 3.3, we learn the distribution of each class in "Learning distribution" to obtain the corresponding diversity information. In "Intra-class distribution normalization", to fully learn the intra-class distribution of ID samples for better classification, we independently normalize the distribution for each class by $L_I^1$. In "Inter-class distribution normalization", we balance the distributions of all the classes by $L_I^2$. By learning the class-wise threshold and data distributions (intra-class distribution and inter-class distribution), we can obtain an adaptive classification decision boundary for each class, which effectively reduces the negative impact of different diversities on the challenging few-shot OOD detection task.
### **W4: the equation following Equation 3 is not numbered. Based on the context in the later sections, it seems to represent $L_1$, not $L_C$.**
We thank the reviewer for the insightful suggestion. We will add the equation number in the final version. In fact, it is $L_C$; $L_1$ is defined in Eq. (9).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comprehensive and detailed responses to my questions. After reading their rebuttal, I believe the authors have addressed all the questions I raised. I believe the manuscript meets the acceptance standard, and I will maintain my score. | Summary: This paper proposes to address the novel and challenging task, multi-diversity few-shot OOD detection. Unlike the previous methods that ignore the distinct diversity between different classes in the few-shot OOD detection task, this paper presents a novel network AMCN. The proposed method first transposes ID prompts into OOD prompts by semantically concatenating ID prompts with OOD suffixes. The semantic concatenation can generate many negative prompts to guide prompt learning in the few-shot OOD detection task with multi-diversity setting. Finally, the paper develops an ID-OOD separation module to control the margin between ID and OOD prompt features by a carefully-designed loss.
Claims And Evidence: The viewpoints are well verified. Specifically, the paper presents three modules to handle multi-diversity few-shot OOD detection. In the ablation study, this paper provides many ablation results to support the claims in Methodology.
Methods And Evaluation Criteria: Yes. Both the proposed method and evaluation criteria make sense for the problem of few-shot OOD detection at hand.
Theoretical Claims: Yes, I have checked the correctness for theoretical claims. This paper clearly presents the motivation of the proposed multi-diversity few-shot OOD detection setting. Three modules in Figure 2 and the Methodology section make sense for solving three challenges: ID classification, adaptive ID alignment, and OOD detection.
Experimental Designs Or Analyses: Yes, I have checked the soundness of any experimental designs and analyses.
Supplementary Material: This submission provides corresponding codes to help me understand the presented method. Codes are available.
Relation To Broader Scientific Literature: Authors make some progress and promote the field of few-shot OOD detection. Especially, authors solve the significant challenges about the distinct diversity between different classes. The experiment section demonstrates impressive performance of the proposed method.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. This paper proposes an expressive and novel Adaptive Multi-prompt Contrastive Network, especially in the few-shot OOD detection task. This paper utilizes a certain number of images with different diversity from each class for training, and conduct OOD detection on the whole testing dataset. Different from previous works that only learn ID prompts for training, this paper constructs ID and OOD prompts for each class to fully understand the images. The setting is very interesting, and the proposed method makes sense.
2. The proposed method is effective. This paper first generates adaptive prompts (learnable ID prompts, label-fixed OOD prompts and label-adaptive OOD prompts). Also, this paper learns an adaptive class boundary for each class by introducing a class-wise threshold. Finally, this paper proposes a prompt-guided ID-OOD separation module to control the margin between ID and OOD prompts.
3. Extensive comparisons with state-of-the-art methods are conducted in the experiment section. Corresponding results demonstrate that the proposed approach can achieve significant performance on four benchmarks under the few-shot OOD detection setting.
4. The motivation of exploring different distributions between different classes in the few-shot OOD detection task and the proposed method are impressive. It is very enlightening to the research community and will inspire more future research. The paper is well-prepared and easy to understand.
Weaknesses:
1. Authors should move the definition of c (“$c \in \{1,...,C\}$ is the corresponding class of $f_x^i$”) to the third paragraph of Section 3.2.
2. In Eq. (7), $O \cdot \mathcal{M}_c^{pse}(t)$ should be $O \cdot \mathcal{M}_c^{pse}(t-1)$?
3. In Table 4, what is the fixed threshold?
Other Comments Or Suggestions: In Figure 4 and Figure 7, the sub-figures are too crowded.
Questions For Authors: I don't have any other specific questions. Please answer the questions in the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer J7Ff for the valuable comments and provide the following detailed responses to all weaknesses.
---
### **W1: Authors should move the definition of c to the third paragraph of Section 3.2.**
Thanks a lot for your valuable suggestion. We will revise it in our revised version.
### **W2: In Eq. (7), $O \cdot \mathcal{M}_c^{pse}(t)$ should be $O \cdot \mathcal{M}_c^{pse}(t-1)$?**
We thank the reviewer for the insightful suggestions. Yes. It should be $O \cdot \mathcal{M}_c^{pse}(t-1)$. We will revise it in our revised version.
### **W3: In Table 4, what is the fixed threshold?**
We appreciate the reviewer’s comments regarding the fixed threshold in Table 4. It is the initial threshold. In the fixed setting, we do not update the threshold during training. Since there are different diversities in different classes, we want to learn different thresholds for different classes. Different diversities correspond to different distributions. In Section 3.3, we learn the distribution of each class in "Learning distribution" to obtain the corresponding diversity information. In "Intra-class distribution normalization", to fully learn the intra-class distribution of ID samples for better classification, we independently normalize the distribution for each class by $L_I^1$. In "Inter-class distribution normalization", we balance the distributions of all the classes by $L_I^2$.
### **W4: In Figure 4 and Figure 7, the sub-figures are too crowded.**
We thank the reviewer for the insightful suggestions. We will redesign them in our final version. | null | null | null | null | null | null |
Policy Regularization on Globally Accessible States in Cross-Dynamics Reinforcement Learning | Accept (spotlight poster) | Summary: This paper proposes Accessible State Oriented Policy Regularization (ASOR), a reward shaping technique for offline and online, off-policy RL with dynamics shift. The work is inspired by the dynamics-agnostic state distribution matching in Imitation from Observation (IfO), but points out that naively imitating expert states could be suboptimal under changed dynamics. Thus, the paper focuses on expert states that remain accessible throughout dynamics changes ("globally accessible"), and maximizes reward in RL subject to a constraint on the state occupancy difference ($\mathcal{F}$-distance) between the occupancy over all states and the occupancy over globally accessible states. Via a Lagrangian derivation, the final implementation trains a GAN discriminating states with high values or high proxy visitation counts (by RND) from other states, and adds a reward shaping term from the GAN onto the original reward. On several offline RL and online RL environments, the proposed method is shown to outperform baselines.
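The reward-shaping step summarized here can be sketched roughly as follows — an illustration only, not the authors' code; the toy linear discriminator, the coefficient `lam`, and the log-odds form of the shaping term are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 1)) * 0.1   # toy linear discriminator (illustrative)

def augmented_reward(r, states, lam=0.1):
    # Discriminator logit -> probability a state is "accessible & high-value";
    # the log-odds shaping term is added on top of the task reward.
    logits = states @ W
    p = np.clip(1.0 / (1.0 + np.exp(-logits)), 1e-6, 1 - 1e-6)
    return r + lam * np.log(p / (1 - p)).squeeze(-1)

r = np.array([1.0, 0.5])
states = np.zeros((2, 4))
print(augmented_reward(r, states))  # states with logit 0 get a zero shaping term
```

States the discriminator deems accessible and high-value receive a positive bonus; others are penalized, steering the policy without touching its action distribution directly.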
## Update After Rebuttal
I have updated my score accordingly during the author-discussion period; no further update.
Claims And Evidence: Yes in general. This paper made several claims: empirically, the paper claims that matching expert states similar to IfO is a good way to learn cross-dynamics RL, but naively matching expert states without considering accessibility is bad; theoretically, the paper claims that one can convert the problem into a constrained optimization with $\mathcal{F}$-distance limited between state occupancies of the policy and the policy on globally accessible states, and the framework has several theoretical guarantees. With Lagrangian multiplier, the objective can be turned into (as a surrogate) an unconstrained one and optimized with a GAN. I agree with most of the claims, except that:
1. not all IfOs are ignorant to dynamic changes;
2. there are cases in cross-dynamics RL where globally accessible states are empty sets which this method does not seem to apply.
See "Methods and Evaluation Criteria" for details.
Methods And Evaluation Criteria: Yes. Overall, the proposed method makes sense for the motivation proposed by the paper that one should follow states that are successful and visited by different dynamics. However, one concern remains: the globally accessible state set might be empty in cross-dynamics RL. This is common in cross-embodiment learning; for example, suppose we want to learn a walking policy for robots with either normal or crippled legs and with different heights. In this case, as the robots may never reach exactly the same state, the globally accessible state set becomes empty. Such a scenario is also considered in some IfO papers that this paper overlooked [1, 2], and thus the statement "only HIDIL considered state distribution mismatch across different dynamics (in IfO) ..." does not hold.
**References**
[1] Y. J. Ma et al. Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching. In ICML, 2022.
[2] K. Yan et al. A Simple Solution for Offline Imitation from Observations and Examples with Possibly Incomplete Trajectories. In NeurIPS, 2023.
Theoretical Claims: There are two theoretical claims in this paper, which are about finite and infinite sample analysis of the proposed method. I briefly checked both of them and they look sensible and correct to me.
Experimental Designs Or Analyses: Yes, and I found the results to be sufficient to prove the effectiveness of the proposed method. The empirical success and detailed analysis on visual environments are particularly convincing. However:
1. Even if many methods are tested, I still feel some IfO methods are missing; for example, the IfO papers I mentioned in "Methods and Evaluation Criteria" also considers cross-dynamics generalization.
2. The authors claim in Sec. 4.1 that "IfO approaches have the worst performance because they ignore the reward information ...". However, there is an easy fix (widely used in decision transformer papers [1] as "10%BC") to make use of the reward labels: one can simply select the few trajectories with the highest reward as "expert trajectories", and treat the rest as "auxiliary unlabeled data" (or simply discard them). There are many IfO papers that can learn from a few expert trajectories with auxiliary unlabeled data, such as SMODICE [2], LobsDICE [3], TAILO [4], MAHALO [5] (which can also do offline RL), etc.
3. Some baselines are a bit out-of-date. For example, CQL is usually considered to be worse than IQL [6], which is a more recognized offline RL algorithm. There are also other newer methods such as XQL [7].
**References**
[1] L. Chen et al. Decision Transformer: Reinforcement Learning via Sequence Modeling. In NeurIPS, 2021.
[2] Y. J. Ma et al. Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching. In ICML, 2022.
[3] G-H Kim et al. LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation. In NeurIPS, 2022.
[4] K. Yan et al. A Simple Solution for Offline Imitation from Observations and Examples with Possibly Incomplete Trajectories. In NeurIPS, 2023.
[5] A. Li et al. MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations. In ICML, 2023.
[6] I. Kostrikov et al. Offline Reinforcement Learning with Implicit Q-Learning. In ICLR, 2022.
[7] D. Garg et al. Extreme Q-Learning: MaxEnt RL Without Entropy. In ICLR, 2023.
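The "10%BC"-style fix suggested in point 2 above amounts to only a few lines — a hedged sketch, with the trajectory format (a list of dicts holding per-step reward arrays) assumed for illustration:

```python
import numpy as np

def split_by_return(trajectories, top_frac=0.1):
    # trajectories: list of dicts, each with a "rewards" array.
    returns = np.array([traj["rewards"].sum() for traj in trajectories])
    k = max(1, int(len(trajectories) * top_frac))
    top_idx = set(np.argsort(returns)[-k:])        # highest-return trajectories
    expert = [t for i, t in enumerate(trajectories) if i in top_idx]
    unlabeled = [t for i, t in enumerate(trajectories) if i not in top_idx]
    return expert, unlabeled

trajs = [{"rewards": np.full(5, r)} for r in [0.1, 0.9, 0.5, 0.2]]
expert, rest = split_by_return(trajs, top_frac=0.25)
print(len(expert), len(rest))  # 1 3
```

The selected top fraction then serves as the "expert trajectories" for an IfO method, with the remainder as auxiliary unlabeled data.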
Supplementary Material: I checked the whole Appendix. It contains derivations, proofs and discussion of the theorems, an intuitive comparison to and discussion of the limitations of prior works, and experiment details. I found the appendix to be very helpful and vital for supporting the theoretical claims of the paper. I would suggest the authors clarify in Appendix A that, instead of applying KKT conditions to the Lagrangian function, the proposed method chooses a value for $\lambda$. There is no other supplementary material.
Relation To Broader Scientific Literature: This paper is beneficial to the Reinforcement Learning (RL) / Imitation Learning (IL) community, and potentially helpful for the robotics and computer vision community (as it contains visual environments). It does not affect much to the scientific literature beyond the communities above.
Essential References Not Discussed: As mentioned in the "Methods and Evaluation Criteria" section, I feel some IfO works are missing, and the authors should also discuss cross-embodiment works where the globally accessible states are empty sets.
Other Strengths And Weaknesses: **Other Strengths**
1. The paper is overall well-written and easy to follow; the motivation from IfO that "one should focus on expert states, but not all expert states" seems natural.
**Other Weaknesses**
1. The method seems to have many moving parts. For this method, one needs to train an RND as a proxy for visitation frequency, a GAN, a context encoder (from ESCP), and an actor and critic together. The training stability of this method could be fragile. The authors mention this in Fig. 3, but the augmented reward seems very close to 0, which contradicts the left-hand side of Fig. 3, where the augmented reward has a very large absolute value. Can the authors explain this?
Other Comments Or Suggestions: 1. the title of Sec. 2, "Backgroud" -> "Background";
2. In this paper, $\lambda$ is used as the Lagrange multiplier, the regularization coefficient, and the Lipschitz coefficient at the same time. This is confusing, as I cannot find the value of the $\lambda$ regularization coefficient in this paper (there is an ablation in Tab. 2, but I am unable to find the value adopted in the main results). The $\lambda$ in Tab. 4 seems to refer to the Lipschitz coefficient.
3. I would suggest the authors to also provide the procedure of ASOR+MAPLE in the appendix besides ASOR+ESCP.
Questions For Authors: I have a question: how many GPUs does the training use? The authors claim that "The training was conducted using NVIDIA TESLA V100 GPUs and takes around 20 hours to train 6M steps", but did not specify the number of GPUs.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and insightful comments. Responses are provided as follows. The linked file is available [here](https://anonymous.4open.science/api/repo/ICML_ASOR_Rebuttal-01CB/file/rebuttal_append_file.pdf).
**Q1: Possibly empty accessible state set**
A: Tasks with no globally accessible states are indeed extreme cases, where ASOR may degenerate into the base RL algorithm. But two practical factors can help alleviate this concern.
- One may regard the state dimensions that remain distinct across dynamics as hidden environment parameters and not include such information when identifying globally accessible states. For example, the robot height in the mentioned example can be excluded from the state space. States that are visited by expert policies with different robot heights, e.g., smooth walking at moderate speed, can be regarded as globally accessible and preferred when training new policies.
- In the practical algorithm procedure of ASOR, a relative accessibility detection mechanism is applied. States that are most likely to be accessible in a training batch are regarded as preferable (Line 7 in Algorithm 1). We can rely on the discriminator network to automatically find appropriate states to assign higher rewards (Line 8 in Algorithm 1).
Meanwhile, the mentioned robotics scenario with different robot leg conditions can also be challenging for base RL algorithms which ASOR is built on, such as cross-domain RL methods. A considerable amount of training data may be needed to train decent policies in such scenarios.
**Q2: Overlooked IfO papers**
A: Thanks for bringing these papers to our attention. We will add discussions in the related work of the revised paper. We also agree with the reviewer that we should give the IfO papers [1,2] due credit for also considering state distribution mismatch. IfO methods are helpful when enough near-expert trajectories or unlabeled data are available, and they have better training efficiency. The advantage of ASOR lies in the fact that its reward augmentation mechanism can be effectively integrated with both offline and online RL methods and adds minimal changes to their original training pipelines.
**Q3: Additional baselines**
A: Thanks for introducing alternative ways of constructing baselines. We refer the reviewer to Tab. 3 of the linked page, where a comparative analysis of SOIL with 10% data and IQL is included. SOIL indeed benefits from the 10% data fix and shows increased performance, while IQL has performance similar to CQL. Neither approach outperforms MAPLE+ASOR. This can be because the offline datasets do not have enough state coverage for SOIL to imitate. Meanwhile, IQL and CQL still focus on the state-action joint distributions in the offline dataset. Such distributions can be unreliable under dynamics shift since, given the same state, the optimal action may differ across dynamics. We are unable to reimplement the mentioned DICE-related methods due to the limited rebuttal time and will add them in the revision.
**Q4: Potentially fragile training stability**
A: Both the RND module and the GAN module are essentially supervised learning on given datasets. Their training will be more stable than the actor-critic module that involves RL loss. The red loss curve in Fig. 3 (right) also indicates that the RND loss and the GAN discriminator loss drop smoothly, indicating a relatively stable training process. One might relate the GAN module to the unstable training of GANs. But in fact, the GAN module only utilizes a GAN-like objective function. Its training procedure is different from GAN in that the data to train the discriminator is directly constructed from the replay buffer, instead of being adversarially generated.
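As a rough illustration of this point (not the authors' code; the joint value/pseudo-count thresholding rule and the fraction `rho` are assumptions), the discriminator's "real"/"fake" sets come from partitioning a sampled batch rather than from a generator:

```python
import numpy as np

def partition_batch(states, values, counts, rho=0.5):
    # Label the top-rho fraction by value AND pseudo-count as "real"
    # (likely accessible, high-value); the remaining states as "fake".
    v_thr = np.quantile(values, 1 - rho)
    c_thr = np.quantile(counts, 1 - rho)
    real_mask = (values >= v_thr) & (counts >= c_thr)
    return states[real_mask], states[~real_mask]

states = np.arange(12, dtype=float).reshape(6, 2)
values = np.array([0.1, 0.9, 0.8, 0.2, 0.7, 0.3])
counts = np.array([1.0, 5.0, 4.0, 2.0, 0.5, 3.0])
real, fake = partition_batch(states, values, counts, rho=0.5)
print(len(real), len(fake))  # 2 4
```

Since both sets are fixed slices of replay data, the discriminator's objective is ordinary supervised classification rather than an adversarial game.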
**Q5: Contradicting augmented reward**
A: Fig. 3 (left) demonstrates the augmented reward of a single step, while in Fig. 3 (right) we show rewards averaged in a batch and multiplied by the coefficient. The extreme values of the augmented reward only exist when the walker agent is about to fall down near the end of the trajectory, so they only make up a small fraction of a training batch. The average augmented reward will therefore remain relatively stable during training.
**Q6: Repeated $\lambda$**
A: Thanks for pointing out this issue. We will change the Lipschitz coefficient to $\mu$ in the revision. The Lagrange multiplier $\lambda$ is the same as the coefficient for the augmented reward according to Eq. (5), so we keep the same notation. For the values of $\lambda$ in different experiments, we refer the reviewer to Tab. 1 of the linked pdf page.
**Q7: Number of GPUs**
A: We only use one V100 GPU for centralized training in Ray. Distributed environment rollouts are carried out with ~300 CPU threads.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reviewer's detailed response. I do not have any other concern, and hopefully the authors can modify the paper accordingly. I shall now raise my score from 3 to 4.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's prompt and positive response. We thank the reviewer for all the time and efforts in reviewing this paper and will update the paper as required. | Summary: The paper addresses the challenge of learning policies when the environment dynamics vary such that expert state trajectories may not always be accessible under dynamics shifts. To overcome this, the authors present the Globally Accessible States with a formal definition and the F-Distance, a measure of the discrepancy between the accessible state distributions of the current and expert policies. Building on these concepts, they present the ASOR Algorithm, which integrates these ideas as a reward augmentation module. Designed as a plug-and-play component, ASOR can be applied to both online and offline RL approaches.
The method is evaluated across diverse benchmarks, including MuJoCo, MetaDrive, and a large-scale Fall Guys-like environment. Combining theoretical insights with extensive empirical validation, the paper demonstrates that regularizing policies based on accessible states enhances robustness across dynamic shifts.
Claims And Evidence: Most of the paper’s claims are supported by a mix of rigorous theoretical analysis and extensive empirical experiments.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally well-aligned with the problem of cross-dynamics reinforcement learning. The core idea of focusing on globally accessible states addresses a real issue in cross-dynamics scenarios. The method directly targets the shortcomings of traditional IfO approaches by regularizing policies to only mimic these reliable states. The theoretical framework (using F-distance with both JS divergence and neural network distance) is solid, even though it relies on certain assumptions that might limit its application in highly adversarial settings.
For the evaluation criteria, the paper evaluates its methods on a variety of benchmark datasets and environments, including MuJoCo, Minigrid, MetaDrive, and a large-scale Fall Guys-like game. These benchmarks cover a range of tasks (both online and offline) and dynamics shifts, providing a comprehensive test bed that is well-suited to assess the proposed method’s performance.
Theoretical Claims: I examined several of the theoretical proofs provided in the paper, particularly those supporting the performance lower bounds (Theorem 3.5 and Theorem 3.7).
Experimental Designs Or Analyses: I examined several aspects of the experimental designs and analyses presented in the paper. Overall, while the chosen benchmarks and metrics are appropriate for cross-dynamics RL, some issues remain on certain experiments. Addressing these problems would strengthen the evaluation of the results:
- Some experimental details, such as the exact procedures for constructing state partitions (DP and DQ) and the calibration of pseudo-counts, are described briefly. This might make it harder for a reader to reproduce the results or fully understand the sensitivity of the approach to these choices.
- Although the authors have provided ablation studies in Table 2, I remain unconvinced of ASOR's robustness to the reward coefficient and state partitioning hyperparameters. First, in the first and third columns, the average result (0.34) is lower than that of the original MAPLE (0.36). The authors should provide more detailed results to illustrate the sensitivity of a broader range of hyperparameter choices. Second, in the final column, it is unclear whether the results across different domains were obtained using the same set of hyperparameters. If not, how were the hyperparameters tuned for each domain? Lastly, perhaps I overlooked it, but I could not find the exact values of the final chosen hyperparameters.
- For the online RL experiments in Figure 2, the authors are encouraged to follow the experimental setups in the SRPO paper more closely, ensuring alignment in iteration numbers and experimental domains. This would help readers better assess whether the proposed method is fairly compared with SRPO. Additionally, in HalfCheetah, ASOR shows only marginal improvements over the baselines. Is there a fundamental reason for its reduced effectiveness in this setting? If the limitation stems from state accessibility not changing significantly in HalfCheetah, this would suggest that ASOR is most beneficial in environments where accessibility varies.
- Regarding the experiments in MetaDrive, the training process appears to be unstable and not fully converged for the compared methods. Could the authors include results with additional training steps for better convergence?
- I find it unclear why the authors describe the Fall Guys-Like Game experiments as large-scale, given that no visual inputs are used and the action space appears to be small. It remains uncertain whether the proposed method would be effective in high-dimensional state and action spaces, such as humanoid or environments with visual observations. Could the authors clarify these points?
Supplementary Material: I have reviewed the proofs, experiment details, model details, and training setups in the appendices.
Relation To Broader Scientific Literature: The key contributions of the paper relate to multiple areas within reinforcement learning (RL), particularly in cross-dynamics policy transfer, imitation learning from observation (IfO), and policy regularization. For cross-dynamics RL, it extends SRPO by redefining state accessibility and improving transfer in dynamic environments.
Essential References Not Discussed: The paper builds on SRPO (Xue et al., 2023a) but does not cite works on meta-learning for RL adaptation, such as MAML (Finn et al., 2017), which are highly relevant in learning adaptable policies across diverse dynamics.
Other Strengths And Weaknesses: Strengths:
- The paper identifies a critical limitation in many IfO approaches—the assumption of identical expert state distributions across dynamics—which is often violated in real-world scenarios. This insight is both timely and relevant for cross-dynamics RL.
- The derivation of performance bounds using two different instantiations (JS divergence and network distance) provides solid theoretical grounding.
- ASOR is designed as a modular add-on that can be integrated with existing state-of-the-art RL algorithms. Its practical implementation via a GAN-like objective for reward augmentation makes it appealing for both online and offline settings.
Other weaknesses:
- While the paper is well-structured overall, some sections (especially those covering theoretical analyses and the accessible states) could be improved with additional intuitive explanations or visual aids. This would help the readers better understand the paper.
Other Comments Or Suggestions: For the methodology part, the construction of the DP and DQ datasets (used for state accessibility filtering) is described briefly but is critical to the method. Adding more details on how the partitioning is implemented and its computational overhead would strengthen reproducibility.
Questions For Authors: 1. In Theorem 3.5 and Theorem 3.7, the performance lower bounds rely on the assumption that the HiP-MDP’s MDPs are M-Rs accessible from each other. How restrictive is this assumption in practice? If it fails in many real-world cases (e.g., highly stochastic environments or POMDPs with high-dimensional observations such as visual inputs), then the practical significance of the theoretical guarantees would be weakened.
2. In some cases, the globally accessible states between the current environment and the expert trajectories may not share the same optimal policy. How do the authors account for this situation? Could ASOR have a negative effect in such cases?
3. How stable is the discriminator training for F-distance estimation? Are there cases where the optimization struggles (e.g., mode collapse, vanishing gradients)? Since GAN training can be unstable, an analysis of failure cases or stability techniques (e.g., spectral normalization, gradient penalties) would be valuable. If the discriminator fails in some environments, addressing this limitation (or providing mitigation strategies) would be important for practical applications.
4. Please see our questions in the "Experimental Designs Or Analyses" section above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and insightful comments. Responses are provided as follows. The linked file is available [here](https://anonymous.4open.science/api/repo/ICML_ASOR_Rebuttal-01CB/file/rebuttal_append_file.pdf).
**Q1: Robustness to hyperparams**
A: Thanks for mentioning the hyperparameter table which is indeed missing in the original paper. We refer the reviewer to Tab. 1 and Tab. 2 in the linked file for detailed hyperparameters. In offline RL, we tried two values of $\lambda$: 0.1 and 0.3. For datasets with higher level of optimality, $\lambda=0.1$ is preferred so that less policy regularization is exerted. $\lambda=0.3$ is preferred on datasets with higher randomness to add more policy regularization based on optimal accessible state distributions. $\rho_1$ and $\rho_2$ can be similarly chosen based on offline data optimality with possible values (0.5, 0.5) and (0.3, 0.3). In medium-expert datasets, states are more likely to be sampled from the optimal accessible state distribution, so higher values $\rho_1$, $\rho_2$ are chosen. In online RL, $\lambda$ is set to 0.03 in MuJoCo environments and 0.1 in the Fall Guys-like environment, mainly to fit the reward scale of different RL environments. $\rho_1$, $\rho_2$ are kept to 0.5 in online RL experiments.
In Tab. 2, a fixed $\lambda=0.1$ has poor performance compared with MAPLE. This is because the augmented reward adds additional variance to environment reward. In some datasets its scale may not be large enough to exert reasonable policy regularization, so the reward variance dominates the training process and leads to a performance drop. Meanwhile, random partition denotes that the real data and fake data to train the GAN-like discriminator in Algorithm 1 are randomly chosen from the training batch. Its poor performance compared with MAPLE demonstrates the importance of constructing discriminator training data according to the accessible state distributions.
In Tab. 4 of the linked file, we conduct additional hyperparameter tuning where $\lambda$ ranges from 0.1 to 0.4 and $\rho_1,\rho_2$ equal to 0.3, 0.5, and 0.7. The results demonstrate that ASOR's performance is relatively stable with $\lambda=0.2, 0.3, 0.4$. $\rho_1,\rho_2=0.7$ may include too many states in the real dataset for discriminator training, so the performance drops accordingly.
**Q2: Marginal improvements in HalfCheetah**
A: We agree with the reviewer that ASOR is most beneficial in environments where accessibility varies, such as the fall-guys like game environment. As discussed in Sec. 4.2 (Lines 380~406), the agent will not fall over in the HalfCheetah environment, so the state accessibility will be more likely to remain unchanged under dynamics shift. This undermines the effectiveness of ASOR and leads to the marginal performance improvements.
**Q3: More training steps in MetaDrive**
A: We clip at 1M episodes to make training steps consistent with other environments. We will add the full training log in the revision.
**Q4: The scale of fall guys-like game environment**
A: The fall guys-like environment has a 1065-dimensional state space containing terrain, map, target, goal, agent, and destination information. Although there are no visual inputs, the state space contains diversified information that is challenging to integrate. Meanwhile, due to the highly dynamic environment and complex interactions between the agent and the environment components, a large number of environment steps is needed to train a decent policy. That is why we call the fall guys-like environment "large scale". The scale of the action space will not influence the effectiveness of ASOR, as the proposed policy regularization objective relies solely on the state distribution and does not take the action or policy distribution into account.
**Q5: Restrictiveness of the assumption**
A: As discussed after Def. 3.3 (Lines 208~219), the assumption is weaker and less restrictive than those in existing methods. It only requires that state $s'$ can be accessed from $s$ after the dynamics shift, as long as it is accessible from $s$ under the original dynamics. It does not require a specific policy to do so and does not limit the number of intermediate states. Such an assumption still holds in the two mentioned extreme cases because the state accessibility does not change.
**Q6: Different optimal policy**
A: According to Eq. (2,3), regularized policy optimization achieves performance lower-bounds without assumptions on how close the optimal policies are. Intuitively, the regularization provides a reasonable starting point for policies to explore more efficiently, but the starting point is not necessarily optimal. The Walker2d environment is known to have many different near-optimal policies, yet our ASOR algorithm still achieves the best performance (Fig. 2).
**Q7: Stability of GAN training**
A: Due to the limited word count, we refer the reviewer to Q4 in the rebuttal to Reviewer UQdV.
---
Rebuttal Comment 1.1:
Comment: The authors' response has largely addressed my concerns. I have raised my rating by one point.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's prompt and positive response. We thank the reviewer for all the time and effort in reviewing this paper and will update the paper as required.
---
Summary: The paper pinpoints a flaw in existing IfO methods where state inaccessibility due to changing environment dynamics can disrupt the similarity of expert state distributions. To tackle this, it presents a policy regularization approach centered on globally accessible states. The proposed framework combines reward maximization and IfO via F-distance regularized policy optimization, leading to the development of the ASOR algorithm through different F-distance instantiations. Experiments across multiple benchmarks demonstrate its effectiveness in enhancing cross-domain policy transfer algorithms, outperforming baselines, and theoretical analysis offers performance guarantees for policy regularization during dynamics shift.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes. For example, Theorem 3.5 shows the lower bound of the learning policy's performance when using JS divergence for F-distance regularization.
Experimental Designs Or Analyses: The experimental designs are sound. In offline RL benchmarks, the authors collect static datasets from environments with different dynamics and compare ASOR with multiple baseline algorithms. In online continuous control tasks, they explore different sources of dynamics shift. In the fall guys-like game environment, they consider highly dynamic and competitive scenarios. The ablation studies in the MuJoCo tasks help to understand the role of different components and hyperparameters in ASOR.
Supplementary Material: Yes. I checked the "Comparisons with Previous Approaches" and "Examples of Distinct State Distributions" parts.
Relation To Broader Scientific Literature: The paper builds on the existing literature of cross-domain policy transfer and Imitation Learning from Observation. It addresses the limitations of previous IfO methods that assume similar state distributions across different dynamics.
The paper defines globally accessible states, providing a new way of looking at state spaces in the context of learning from diverse dynamics.
Essential References Not Discussed: No
Other Strengths And Weaknesses: ### Strengths:
- The paper is well-written and the concepts are clearly explained.
- Policy regularization on globally accessible states is a novel way to handle dynamics shift in RL.
### Weaknesses:
- Although the definition of accessible states is clear in theory, it may be extremely difficult to accurately determine whether a state is globally accessible in a real-world environment. Real systems may have noise, unmodeled factors, or overly complex dynamic changes, making it hard to determine whether there exists a policy that can access the state with a non-zero probability for all values of the dynamic parameters.
- Sampling from and estimating several different distributions are crucial steps in the algorithm. However, in practice, accurately sampling from these distributions can be challenging, especially sampling from $d^{*, +}_{T_0}(\cdot)$. If the sampling is inaccurate, it may lead to a deviation in the estimation of the state distribution, thereby affecting the training of the discriminator and the learning effect of the policy.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Ethical Review Flag: Flag this paper for an ethics review.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and insightful comments. Responses are provided as follows.
**Q1: Determining accessibility in practice**
A: Tasks with a potentially empty globally accessible state set are indeed extreme cases where ASOR may degrade into the base RL algorithm. An example proposed by Reviewer UQdV is learning a walking policy for robots with either normal legs or crippled legs, and with different heights. In this case, as the robots may never reach exactly the same states, the globally accessible state set becomes empty. But two practical factors can help alleviate this concern.
- One may regard the state dimensions that remain distinct across dynamics as hidden environment parameters and exclude such information when identifying globally accessible states. For example, the robot height in the mentioned example can be excluded from the state space. States that are visited by expert policies with different robot heights, e.g., smooth walking at moderate speed, can be regarded as globally accessible and preferred when training new policies.
- In the practical algorithm procedure of ASOR, a relative accessibility detection mechanism is applied. States that are most likely to be accessible in a training batch are regarded as preferable (Line 7 in Algorithm 1). Even though there are no states that are perfectly accessible across dynamics, we can rely on the discriminator network to automatically find the most appropriate states to assign higher rewards (Line 8 in Algorithm 1).
**Q2: Sampling from the accessible distribution**
A: According to Sec. 3.5 of the paper, instead of directly sampling from the intractable accessible state distribution $d_{T_0}^{\*,+}(s)$, we estimate its likelihood ratio with respect to $d_T^\pi(s)$. The key property used here is Prop. 3.8, which shows that the likelihood ratio can be obtained by optimizing a classifier similar to the GAN discriminator, as long as the real data are sampled from the state distribution in the numerator of the ratio and the fake data from the denominator. According to Eq. (4), the likelihood ratio $\frac{d_{T_0}^{\*,+}(s)}{d_T^\pi(s)}$ can be decomposed into the product of state optimality and accessibility. We therefore use the state value function and state visitation pseudo-count approaches to split the training batch and create classifier training data. In this way, we obtain an estimation of the likelihood ratio, which serves as the augmented reward in practice.
---
Summary: This paper studies policy learning in cases where the environment dynamics may vary during training. Even when expert demonstrations or replay buffers are provided, some states within them may be inaccessible. In such situations, the policy should identify which states are globally accessible and consider only the information from them.
To address this, the paper proposes ASOR, an add-on GAN-like module that identifies states with high values or proxy visitation counts. ASOR is integrated with existing RL methods and achieves superior or competitive performance compared to baselines on various benchmarks, including Minigrid, D4RL, MuJoCo, and Fall Guys-like battle royale games.
Claims And Evidence: This paper highlights the potential impact of changes in the environment's dynamics and their consequences—specifically, the accessibility of states must be considered when learning from collected transitions. Otherwise, the policy may be misled, as it was trained under different dynamics and scenarios.
[+] In my opinion, the claim appears sound, and the theoretical support—ranging from infinite samples and finite samples to a practical GAN-like implementation—seems accurate. Moreover, the paper conducts comprehensive experiments to empirically validate its claims.
[-] However, one concern I have is that even if we determine that a given task and environment can be suitably defined in the form of HiP-MDP, how can we obtain the required information in practice? For example, how do we estimate the accessible state distribution? How many orders of magnitude of transitions are required to obtain a reliable estimate? What should be done if there is a safety issue?
Methods And Evaluation Criteria: The ASOR method has the following major components or strategies, including a GAN-like training method, tracking state values and proxy visitation counts, and using a Lagrangian multiplier to augment rewards with the discriminator's output.
[+] These designs make sense to me. Intuitively, they would make the policy more cautious when visiting areas that are likely to change over time. These designs are also supported by the ablation studies (Table 2, Figure 3).
[+] The results for each environment are tested over multiple rounds, and the standard deviation of the method's performance is reported as well.
[-] One question I have: while ASOR may encourage the policy to focus on areas that are not affected by changes in dynamics and have high values, how can it determine which dynamics it is in during inference? For example, taking the lava scenario in Figure 1, the policy may know that there is uncertainty in the intersection areas of rows 2, 3, and 4 and columns 4, 5, and 6. But when it stands at (2, 3), how does it know which situation it is facing now?
Theoretical Claims: [+] As mentioned above, the theoretical claims are sound, and their proof seems accurate. Those toy examples and illustrations play a critical role, helping the reader follow the derivation even though it is somewhat cumbersome.
[-] Some notations use superscripts and subscripts to represent different meanings, which can easily be confused while reading, such as $d_{T}^{\pi^{*}}(s)$ , $d_{T}^{\*}(s)$, $d_{T_{\theta}}^{\pi}(s)$, and $d^{\pi, +}_{T}(s)$. If a clearer and more concise way of expression exists, the theoretical paragraphs will be easier to read.
Experimental Designs Or Analyses: [+] ASOR is compared with multiple baseline methods in environments across different fields. Although ASOR achieves sub-optimal results in some cases, I acknowledge that it performs the best overall.
[+] This paper analyzes the experimental results in depth, rather than simply reporting the results of each method, which is commendable.
[-] I believe diffusion and maximum entropy (max-ent) policies should be compared in the experiments, as these policies are effective at handling sudden changes in the environment. Comparing with them and still achieving the best performance would make ASOR's effectiveness and superiority even more convincing.
[-] Another experiment worth exploring is the scenario when the dynamics during training and testing deviate. Since in Algorithm 1, the dynamics are randomly sampled, it may happen that a particular dynamic is rarely sampled, and thus, the policy may not learn well from transitions of that dynamic. One possibility is that a dynamic is excluded during training but may be sampled during inference. In this case, what would be the difference in performance between each method?
Supplementary Material: Due to limited time, I quickly reviewed the entire appendix but did not carefully check the proof for correctness.
Relation To Broader Scientific Literature: [+] Although HiP-MDP is a relatively new and niche research direction, I find it more aligned with practical scenarios and believe it has the potential to enhance RL applications in real-world settings. I appreciate efforts to explore this direction and propose novel methods to push its boundaries.
Essential References Not Discussed: [+] Since there is little literature related to HiP-MDP, I believe this study sufficiently cites and discusses the literature in this field.
[-] However, it would make the literature review more complete if HiP-MDP were discussed alongside other potentially related MDP settings, such as LMDP [R1]. Additionally, diffusion and max-ent policies are known to be good at dealing with environmental (dynamics) changes, and I believe they should be discussed and examined in this work.
[R1] Zhou et al., "Horizon-Free and Variance-Dependent Reinforcement Learning for Latent Markov Decision Processes," ICML 2023.
Other Strengths And Weaknesses: [+] The proof, descriptions of the baseline method, and details of the evaluation environments provided in the appendix allow readers to more fully examine how the study was conducted and provide sufficient reproducibility.
[-] A minor typo is found at line 383: "ASOR's will have ...".
Other Comments Or Suggestions: All my comments and suggestions are listed in the appropriate fields above. Since I have rarely engaged in HiP-MDP research before, I am giving a relatively conservative initial recommendation of 'weak acceptance.' However, I am happy to raise my recommendation if my concerns are addressed during the discussion phase and no significant or widely shared concerns are raised by other reviewers.
Questions For Authors: My concerns and questions have been summarized in the above fields, please consider addressing those points marked with [-].
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and insightful comments. Responses are provided as follows. The linked file is available [here](https://anonymous.4open.science/api/repo/ICML_ASOR_Rebuttal-01CB/file/rebuttal_append_file.pdf).
**Q1: Practical estimation concerns**
A: We discuss in Sec. 3.5 how to obtain practical estimations. Instead of directly estimating the intractable accessible state distribution $d\_{T_0}^{\*,+}(s)$, we estimate its likelihood ratio with respect to $d_T^\pi(s)$. The key property used here is Prop. 3.8, which shows that the likelihood ratio can be obtained by optimizing a classifier similar to the GAN discriminator, as long as the real data are sampled from the state distribution in the numerator of the ratio and the fake data from the denominator. According to Eq. (4), the likelihood ratio $\frac{d_{T_0}^{\*,+}(s)}{d_T^\pi(s)}$ can be decomposed into the product of state optimality and accessibility. We therefore use the state value function and state visitation pseudo-count approaches to split the training batch and create classifier training data. In this way, we obtain an estimation of the likelihood ratio, which serves as the augmented reward in practice.
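As a reader's aid, the classifier-to-ratio trick can be illustrated with a small, self-contained toy sketch of my own (a one-dimensional setup, not the paper's code): a logistic "discriminator" $D$ is trained to separate samples from the numerator distribution ("real") from samples from the denominator distribution ("fake"), and the density ratio is then recovered as $D(x)/(1-D(x))$.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=2.0, scale=1.0, size=(4000, 1))    # numerator samples
fake = rng.normal(loc=-2.0, scale=1.0, size=(4000, 1))   # denominator samples

X = np.vstack([real, fake])
y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])

# Plain logistic-regression discriminator trained by gradient descent.
w, b = np.zeros(1), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def density_ratio(x):
    """Estimate p_real(x) / p_fake(x) as D(x) / (1 - D(x))."""
    d = 1.0 / (1.0 + np.exp(-(np.asarray(x) @ w + b)))
    return d / (1.0 - d)

# The estimated ratio is large where the "real" density dominates and small
# where the "fake" density dominates.
print(density_ratio([[2.0]])[0], density_ratio([[-2.0]])[0])
```

This sketch only illustrates the generic classifier-based ratio estimation of Prop. 3.8; in ASOR's setting, the log of such a ratio would play the role of the augmented reward term.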
For the amount of transitions required, we refer the reviewer to Fig. (3)(right), where the red line (extra loss) contains the loss of training the estimation network. It takes roughly 1M steps to obtain a decent estimation, which is 1/6 of the total policy training steps. In other words, a relatively accurate likelihood ratio estimation is easier to obtain compared with a good RL policy, and can provide reasonable guidance in the policy training process.
When there are safety issues, safe RL techniques can be integrated as ASOR does not change the training pipeline of the base algorithms. One may also increase the coefficient of the augmented reward because it encourages the policy to stick to states that are more likely to be visited by expert policies in different dynamics.
**Q2: Dynamics identification**
A: Different base algorithms have their own approaches to identifying environment dynamics, and ASOR, as a general reward augmentation algorithm, is agnostic to these approaches. For example, MAPLE and ESCP use context encoders trained with auxiliary losses to detect dynamics changes. In the fall guys-like game environment, we include transformers in the policy network to automatically infer the environment dynamics from the history of states and actions. In the lava scenario, we add environment information to the state space for simplicity. Apart from location coordinates, the state space has a 0-1 variable indicating whether there is a lava block nearby (Lines 153-158). In Fig. 1 (top), the agent at (2,3) will have (2,3,1) as its state; in Fig. 1 (bottom), the state will be (2,3,0). The agent is therefore informed of which situation it is facing.
We will add the aforementioned discussion to the revision.
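The lava-scenario state construction just described can be sketched as follows (a minimal illustration of my own; the adjacency rule and grid contents are assumptions, since the rebuttal only says "nearby"):

```python
def lava_state(pos, lava_cells):
    """Return (x, y, flag) where flag is 1 iff a lava block is adjacent."""
    x, y = pos
    nearby = any((x + dx, y + dy) in lava_cells
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0))
    return (x, y, int(nearby))

# Agent at (2, 3) with lava at (2, 4): state (2, 3, 1); no lava: (2, 3, 0).
print(lava_state((2, 3), {(2, 4)}), lava_state((2, 3), set()))
```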
**Q3: Confusing notations**
A: Sorry for the confusing notations. $d_T^*(s)$ is shorthand for $d_T^{\pi^*}(s)$ and denotes the same distribution. For accessible state distributions such as $d^{\pi,+}_T(s)$, we plan to use a new notation $\delta^\pi_T(s)$ for better readability.
**Q4: Experimental designs**
A: Thanks for recommending diffusion policies as a baseline. We consider the Diffusion-QL algorithm and refer the reviewer to Tab. 3 of the linked pdf for comparative analysis. It exhibits better performance than CQL, especially in the Hopper environment, but cannot outperform the average performance of MAPLE+ASOR. This may be because Diffusion-QL tries to recover the state-action joint distribution in the offline dataset. Such a distribution will be unreliable under dynamics shift since, given the same state, the optimal action may differ across dynamics. For maximum entropy policies, we included SAC as a baseline in the online RL experiments. Meanwhile, ESCP and ESCP+ASOR are themselves built upon SAC and benefit from the maximum entropy training objective.
For experiments with deviated dynamics during training and testing, i.e., testing with out-of-distribution dynamics, we refer the reviewer to Fig. 1 of the linked pdf. We utilize the OOD setting in ESCP, where test environment parameters, including the damping coefficient and the wind speed, are sampled from distributions that are 33% broader than the training distribution. In this scenario, HalfCheetah and Ant witness performance drops across different algorithms. Algorithms show almost no performance changes in the Walker2d environment on unseen test environments. This may be because dynamics shift exerts less influence on the walker agent. In both scenarios, ESCP+ASOR still achieves the best results, demonstrating its ability to efficiently train a more robust policy.
**Q5: Literature completeness**
A: Thanks for bringing LMDP and diffusion RL research into our attention. We will add discussions on these related works.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' effort in addressing my concerns and questions. All my concerns have been well addressed or clarified. After reviewing the discussions in other reviewers' threads, I did not find any major concerns shared among the reviewers. Therefore, I am happy to raise my score to 'Accept' to acknowledge the authors' effort. Great job, and good luck!
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's prompt and positive response. We thank the reviewer for all the time and efforts in reviewing this paper and will update the paper as required. | null | null | null | null | null | null |
---
On the Power of Context-Enhanced Learning in LLMs | Accept (spotlight poster)
---
Summary: The paper introduces *context-enhanced learning* as a training setup where an LLM is fine-tuned on a task with extra context provided, but no gradient is directly taken on that context.
The main claims of the paper are:
1. Context-enhanced learning improves training sample efficiency on reasoning tasks. This is presumably because the phrasebook rules are internalized atomically (*i.e.*, the model is learning to apply intermediate steps, as opposed to memorizing end-to-end shortcuts).
   1. Context-enhanced learning only works with models that are ICL-capable.
   2. Context-enhanced learning only works if you drop out parts of the context during training.
2. Phrasebooks are processed sequentially layer-by-layer.
3. It is challenging to prompt the model to regurgitate the rules of the intermediate phrasebooks without token filtering.
Claims And Evidence: Overall, the claims made in this paper are well-supported by empirical evidence on **synthetic tasks**. However, in my view, the main limitation of this work is that there are no experiments on real data and even the synthetics are limited to a single synthetic task.
1. This claim is supported by the results in Figure 2 as well as the theoretical results in Section 5. It would be nice to see a demonstration of this phenomenon on real data or other synthetic tasks.
2. I found the mechanistic analysis in Figures 3 and 4 to be quite compelling for this claim — and the presentation was clear.
3. There seems to be some evidence on synthetic data supporting this claim in Section 4. However, I’m concerned that this result may not transfer when moving to real language — when dealing with real language, couldn’t there be clever prompting strategies that an adversary could use to elicit the rules? Without any results on real data, the claims about the privacy-preserving capabilities of context-enhanced learning feel quite speculative.
Methods And Evaluation Criteria: Overall, the authors make clever use of synthetic datasets and mechanistic analyses to illustrate their point.
As I discuss above, the paper would be greatly strengthened if it included experiments on real language data.
Theoretical Claims: I did not carefully check the correctness of the proofs, but read the theoretical claims closely. The theoretical tools the authors use make sense and their theorems align nicely with their empirical claims.
Experimental Designs Or Analyses: Yes see discussion of claims above.
Supplementary Material: I reviewed Appendix C.3 and Appendix D.
Relation To Broader Scientific Literature: The paper studies an important open problem in the literature: how do we get models to perform sequential chain of thought without explicitly supervising the model to do so. Context-enhanced learning is a creative alternative to explicit reasoning supervision that I have not seen explored in recent literature.
Essential References Not Discussed: There are a number of prior works that study sequential reasoning tasks with synthetics and theory that could be important to cite given the mechanistic analysis of sequential reasoning. Consider mentioning them in related work.
https://arxiv.org/abs/2310.07923
https://openreview.net/forum?id=NikbrdtYvG#discussion
https://openreview.net/pdf?id=2N3CtUdoB0
Other Strengths And Weaknesses: The paper’s presentation quality is excellent.
Perhaps the most significant limitation is how well these findings translate beyond the neat confines of synthetic experiments. Real-world tasks rarely come with a perfectly curated “phrasebook” of rules that can be placed in the context. **One concern is the availability and quality of context**: The method assumes we have additional helpful text for each training example. In practice, obtaining such aligned context might require extra annotation or the output of a stronger model.
Other Comments Or Suggestions: **Clarification and typos.**
- In Figure 2, I can’t see the pink or brown lines. Presumably they are underneath the random-guess line? Is there some way this can be made clearer?
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you so much for your constructive comments and questions!
Our responses to your questions and concerns are as follows:
---
### Applying Context-Enhanced Learning to Real World Data
Context-enhanced learning (CEL) has been shown to outperform standard training or fine-tuning approaches that do not leverage in-context information across a range of real-world tasks, both during pre-training (e.g., Allen-Zhu et al., 2024; Gao et al., 2025) and fine-tuning (e.g., Liao et al., 2024; Zou et al., 2024) (see lines 036–044).
Following your suggestion, we conducted fine-tuning experiments on the GSM8K dataset using two base models—Qwen-7B and LLaMA 3.1 8B—adapting the context-enhanced learning approach from prior fine-tuning works. Specifically, we included helpful in-context examples during training to enable CEL.
We experimented with three strategies for selecting in-context learning (ICL) examples:
* **Fixed ICL**: A fixed set of in-context examples is used for all training instances. We use the ones used by Qwen-2.5 models for their ICL evaluations.
* **Random ICL**: Five in-context examples are randomly sampled for each training example individually.
* **Skill-based ICL**: In-context examples are selected based on skill similarity, following the strategy proposed in [1].
In all three cases, we find that CEL outperforms baseline fine-tuning, which only uses question–answer pairs without additional context. Evaluation is performed zero-shot on the trained models. Our fine-tuning approach for CEL follows the Annealing Dropout strategy.
|Pre-trained Model| Baseline (SFT) | Fixed-ICL CEL | Random-ICL CEL| Skill-based ICL CEL|
|:--:|:--:|:--:|:--:|:--:|
|Qwen-7B base|85.5|87.4|87.9|88.4|
|Llama-3.1 8B base|76.5|77.4|76.5|77.2|
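For concreteness, the mechanic shared by all three CEL variants, taking loss only on the answer while the in-context examples remain visible, can be sketched as follows. This is a minimal illustration of my own with made-up token ids, assuming the common convention that labels set to -100 are skipped by causal-LM loss functions; it is not the authors' training code.

```python
IGNORE = -100  # label value conventionally skipped by causal-LM loss functions

def build_example(context_ids, question_ids, answer_ids):
    """Concatenate context + question + answer; mask the loss off the context."""
    input_ids = list(context_ids) + list(question_ids) + list(answer_ids)
    # Context (and question) tokens are visible to attention but excluded from
    # the loss, so no gradient is taken directly on the in-context examples.
    labels = [IGNORE] * (len(context_ids) + len(question_ids)) + list(answer_ids)
    return input_ids, labels

ids, labels = build_example([11, 12, 13], [21, 22], [31, 32, 33])
```

The same construction works regardless of how the in-context examples are selected (fixed, random, or skill-based); only `context_ids` changes.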
---
### Difficulty of Curating In-Context Curriculum
>"The method assumes we have additional helpful text for each training example. In practice, obtaining such aligned context might require extra annotation or the output of a stronger model."
Thank you for raising this important point. Our primary motivation was to theoretically demonstrate that models can learn more efficiently when provided with additional supervision. In practice, such supervision could come from various sources—either via outputs from a stronger model or through retrieval of relevant information from a pre-training corpus.
Prior work has offered promising examples of how such context can accelerate learning. For instance, Gao et al. (2025) and Zhu et al. (2024) show that including metadata—such as source information associated with documents—can lead to faster training and improved performance at test time.
While our current work focuses on analyzing these benefits in a controlled setting, we believe that a key avenue for future research is in developing better methods to obtain and integrate high-quality supervision context for training language models more effectively.
---
### Alternative Ways for Eliciting In-Context Rules
>"When dealing with real language, couldn’t there be clever prompting strategies that an adversary could use to elicit the rules? Without any results real data, the claims about the privacy preserving capabilities of context-enhanced learning feels quite speculative."
We admit that our current evaluation is restricted to verbatim memorization of the phrasebook rules and does not consider adversarial robustness. While our results demonstrate that recovery of phrasebook information is limited in our synthetic setting, we acknowledge that this does not constitute definitive evidence that the information is completely hidden. Additionally, we do not yet provide observations on real-world data. (For the new GSM8K experiments, the context is selected from the same training set, making it difficult to distinguish memorization).
That said, we view this work as the first step toward thinking about privacy preservation in context-enhanced learning settings. Memorization on real world data and in adversarial prompting settings are interesting directions to continue exploring on.
---
### Other suggestions on related work and presentation
Thank you for bringing up the related works and the legibility issue of Figure 2 (the ablation data points all overlap around "near random"). We will discuss the mentioned related works and make the figure clearer in future versions of the manuscript.
---
#### References
[1] Didolkar, Aniket, et al. "Metacognitive capabilities of llms: An exploration in mathematical problem solving." Advances in Neural Information Processing Systems 37 (2024): 19783-19812.
---
Summary: This paper introduces a new concept called context-enhanced learning for LLMs, where they add context related to the task and training time step t in addition to the training data of x and y, and no loss is applied to the context text. To study how this will help LLM learning, they introduced a task called multi-level translation, which is a bijection task that maps input to output according to phrasebooks. Using this task, they prove in a simplified setting that context-enhanced learning can be exponentially more sample-efficient than standard learning when the model is capable of in-context learning. They also demonstrate mechanistically that the benefit arises from more accurate gradient learning signals and show that it's difficult to detect or recover learning materials used in the context during training.
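As a reader's aid, the multi-level translation task summarized above can be sketched in a simplified single-token form (my own toy construction; the paper's phrasebooks actually operate on token tuples): each level is a random bijection over the alphabet, and the target output is the composition of all levels applied to the input.

```python
import random

rng = random.Random(0)
alphabet = list(range(8))

def random_bijection(rng, alphabet):
    shuffled = alphabet[:]
    rng.shuffle(shuffled)
    return dict(zip(alphabet, shuffled))

# One "phrasebook" per translation level; the task is their composition.
levels = [random_bijection(rng, alphabet) for _ in range(3)]

def translate(seq, books):
    for book in books:
        seq = [book[t] for t in seq]
    return seq

print(translate([0, 1, 2, 3], levels))
```

Because every level is a bijection, the overall map is invertible, which is what makes the task a clean testbed: each intermediate step is fully determined by its phrasebook.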
Claims And Evidence: I have concerns about the experiment in section 3.3 on sequential processing:
When replacing $STR(\pi_i)$ with $STR(\hat{\pi}_i)$, the experiment doesn't control for the positional effects of the replacement. Transformers process information based on both content and position, so the observed changes in representations could simply reflect the model's sensitivity to any change at that position, rather than specifically processing the phrasebook.
A proper control would include replacing non-phrasebook content at the same positions to test whether the observed pattern is specific to phrasebook processing or a general property of how transformers process sequential information. Without this, it's premature to conclude that "the first layer showing a significant difference is identified as the layer where the model begins processing the phrasebook."
Methods And Evaluation Criteria: please see below.
Theoretical Claims: no i didn't check fully.
Experimental Designs Or Analyses: please see below
Supplementary Material: I reviewed some of the appendix content.
Relation To Broader Scientific Literature: The paper's concept of context-enhanced learning extends Learning Using Privileged Information, and its annealing method relates to curriculum learning.
Essential References Not Discussed: no
Other Strengths And Weaknesses: Pros:
1. The paper introduces an interesting concept and clean formalism along with an interesting synthetic task to test the idea.
2. The experiments suggest that context-enhanced learning significantly improves sample efficiency on the MLT task.
3. Extensive mechanistic experiments are conducted.
4. Strong theoretical analysis showing an exponential gap in sample complexity.
5. The paper offers privacy implications for data security and copyright through the difficulty of recovering context materials.
Cons:
1. The MLT task is artificial and synthetic with explicit rules. In real-world tasks like math or coding, there aren't many explicit rules that are universal and guarantee the mapping from a coding question to a block of code. It would greatly improve the paper if the authors had experiments on how to apply context-enhanced SFT in real-world tasks.
2. I have a concern about Sec. 3.3: a proper control is needed to test whether the observed pattern is specific to phrasebook processing or a general property of how transformers process sequential information.
Other Comments Or Suggestions: 1. The "No Dropout (ablation)" curve is invisible in Figure 2.
2. In the paper, they have shown that "Context-enhanced learning from an ICL-capable model greatly improves training sample efficiency." What about overall training efficiency in terms of training FLOPs? Adding the context introduces additional computation in the attention calculations, even though no loss is taken on the context.
3. What will happen if you take loss on the context? Will that lead to even better sample efficiency?
Questions For Authors: 1. In line 165 "• Annealing Dropout: A better strategy: for s1, select the necessary rules from Π∗ plus 25% unused rules.", how are the necessary rules selected, and what are the unused rules?
2. Have you considered applying this approach to more realistic tasks like math or coding problems?
3. In Figure 2, I'm assuming "No Dropout" is overlapping with "No Context"; why is this happening? Why is using all the useful context not improving sample efficiency? If "No Dropout" is not improving sample efficiency over "No Context," this might contradict the paper's central claim that context enhances learning.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you so much for your constructive comments (especially regarding section 3.3) and questions!
Our responses to your questions and concerns are as follows:
---
### Definition of Necessary Rules and Unused Rules (line 165)
For an input sequence $s_1$ of length 20-40 tokens, each translation step will involve 10-20 2-tuple translations specified by the phrasebook of that step. Out of the $n^2$ total rules of that phrasebook, we define the set of necessary rules for that input sequence to be the phrasebook entries involved and the set of unused rules as its complement.
We will also use the concepts of "necessary rules" and "unused rules" in an additional experiment introducing a proper control on the mechanistic experiment (see below).
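As a hypothetical sketch of this split (the dict representation, the non-overlapping 2-tuple segmentation, and all names below are illustrative assumptions, not our exact implementation):

```python
# Illustrative sketch: a phrasebook maps 2-tuples to 2-tuples; the rules
# "necessary" for an input sequence are exactly the entries its 2-tuples hit.

def split_rules(phrasebook, seq):
    """Return (necessary, unused) phrasebook entries for one translation step."""
    # Assumes the sequence is segmented into consecutive non-overlapping 2-tuples.
    pairs = {tuple(seq[i:i + 2]) for i in range(0, len(seq) - 1, 2)}
    necessary = {k: v for k, v in phrasebook.items() if k in pairs}
    unused = {k: v for k, v in phrasebook.items() if k not in pairs}
    return necessary, unused

# Toy case with n = 2, so the phrasebook has n^2 = 4 entries.
pb = {("a", "a"): ("b", "b"), ("a", "b"): ("b", "a"),
      ("b", "a"): ("a", "b"), ("b", "b"): ("a", "a")}
necessary, unused = split_rules(pb, ["a", "b", "a", "b"])
# Only the pair ("a", "b") occurs, so 1 necessary rule and 3 unused rules.
```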
---
### Proper Control in Section 3.3
To rule out the possibility that the affected depth is only related to the position of the perturbation rather than its semantic content, we introduce a new set of experiments ([see new figure](https://ibb.co/ym187Skp)). For each phrasebook $\pi_i$, we selectively perturb tokens of 10 necessary rules (see the first row) or 10 unused rules (see the second row). Note that $STR(\pi_i)$ is randomly shuffled, so these necessary rules and unused rules are interleaved in position.
From the figure, perturbing necessary rules results in a qualitatively similar figure to the one in the current manuscript (with clear differences starting at certain layers), but perturbing unused rules around the same positions yields negligible representation differences across all layers. This suggests that the representation difference reflects the model's processing of the **useful phrasebook information** with respect to the current translation task. We will replace Fig. 3 with the new figure in the next version of the manuscript.
---
### Real-World Tasks
We have conducted additional experiments using context-enhanced learning to improve math capabilities. Please refer to [our response to reviewer PGoK](https://openreview.net/forum?id=Gn6L4QRKf7&noteId=qbF30B1xu4).
---
### FLOP Improvement
Empirically, for $n=8, d=5$ (results reported in Figure 2), the SFT baseline with 1000k samples took around 14 hrs while the annealing dropout curriculum with 100k samples took around 2 hrs to complete on the same device.
Theoretically, the full phrasebooks are of length $O(n^2d)$, so including them will lead to a slowdown of $O(n^4d^2)$ in FLOPs (quadratic for attention). With large $d$, the $\exp(d)$ sample efficiency improvement will still dominate the polynomial slowdown per sample induced by the in-context phrasebooks.
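This trade-off can be sanity-checked with back-of-the-envelope arithmetic (all numbers below are illustrative assumptions, not measured values; in particular the base sequence length `L`, the choice of `d`, and the exact sample-complexity constants are made up):

```python
# Toy cost model: total FLOPs ~ (# samples) x (sequence length)^2, reflecting
# quadratic attention. Baseline SFT needs ~n**d samples on length-L inputs;
# context-enhanced learning needs poly(n, d) samples on inputs of length
# L + n**2 * d, because the phrasebooks sit in the context.

def total_flops(num_samples, seq_len):
    return num_samples * seq_len ** 2

n, d, L = 8, 20, 40                                 # assumed toy sizes
baseline = total_flops(n ** d, L)                   # no context in the prompt
enhanced = total_flops(d * n ** 6, L + n ** 2 * d)  # phrasebooks in the prompt
# For large d, the exponential saving in samples dwarfs the per-sample slowdown.
```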
---
### Training with Loss on Context
Following your suggestions, we reran the annealing dropout experiments with $n=8, d=5$ while explicitly computing the next-token prediction loss on the context. We report the test accuracy for various numbers of training samples and compare it with the context-enhanced learning case (in Figure 2).
|# Samples | 10,000 | 25,000 | 50,000 | 100,000 | 250,000 |
|---|:--:|:--:|:--:|:--:|:--:|
| No Loss on Context | 12.7% | 20.0% | 100.0% | 100.0% | 100.0% |
| Loss on Context (new) | 15.6% | 31.6% | 98.1% | 97.1% | 98.9% |
In general there is no qualitative difference in sample efficiency when we train with loss on context.
To understand what other differences the additional loss incurs, we conducted the following additional experiments:
* We conducted the same verbatim memorization test as in Section 4 on these models. The checkpoint trained with just 10k samples already attained a 99.8% recovery success rate, demonstrating very strong verbatim memorization of the phrasebooks despite this yielding no additional benefit.
* We [reproduced Figure 4 for the model trained with loss on context](https://ibb.co/ccn6NsNk) using 250k samples and observe localized storage similar to the case of having no loss on context.
* We also conducted the No ICL ablation with loss on context, resulting in near-random accuracy for all settings. Even when loss is taken on the context, the MLT-ICL capability is still crucial for improved sample efficiency.
The three observations above suggest that *verbatim memorizing the phrasebook* and *being able to translate without the phrasebook in context* are likely decoupled. Computing loss on the context does not fundamentally change the mechanism of internalization.
---
### Bad Performance of No Dropout
We would like to clarify that our central claims (both the empirical and theoretical results) all require proper dropout of the helpful context.
In the "No Dropout" scheme, the model would always have very low loss throughout training. This is because we start from an MLT(n,d)-ICL-capable model, which performs the task perfectly when conditioning on full context. Thus there would be little learning signal pushing the model to internalize the phrasebooks if we still always condition on full context.
When no context is dropped, we would expect no internalization. | Summary: This paper investigates the power of context-enhanced learning in large language models. The authors propose a synthetic machine translation task that utilizes phrasebooks to transform initial strings. Experimentally, the paper demonstrates that if the base model is MLT(d, n)-ICL-capable, context-enhanced learning enables more sample-efficient adaptation to MLT_{\Pi_}-capability. Furthermore, the study establishes that there is no information leakage regarding \Pi_* in the SFT model. Using a mechanistic interpretability approach, the authors analyze the trained model’s behavior to distinguish between an ICL-capable model and an MLT_{\Pi_*}-capable model. Theoretically, the paper shows that standard SFT requires an exponential number of samples to learn an MLT_{\Pi_*}-capable SURR-MLT, whereas context-enhanced learning only requires a polynomial number of samples.
Claims And Evidence: The paper provides extensive experimental results, a mechanistic interpretability analysis, and theoretical justifications to support its claims. The presented evidence appears sufficient to substantiate the claims.
Methods And Evaluation Criteria: The methodology and evaluation criteria are sound.
Theoretical Claims: I did not verify the proofs in detail, but they appear sound at first glance.
Experimental Designs Or Analyses: The experimental results appear robust.
Supplementary Material: I didn't review the supplementary materials
Relation To Broader Scientific Literature: The paper addresses training methods for LLMs, a topic that is closely related to the broader scientific literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The proposed framework of context-enhanced learning and the design of the synthetic MLT task is novel. The contribution is solid.
The paper was somewhat challenging to follow on the first read due to the dense mathematical formulations and technical descriptions of the task and methods. The second read was more manageable. Including a figure to illustrate the concept of context-enhanced learning, the MLT-ICL task, and the training/testing methodology for context-enhanced learning would greatly improve readability. However, I acknowledge that space limitations may make this difficult.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. There appears to be an intriguing OOD generalization effect in the SFT and testing phases of the downstream MLT_{\Pi_*} task: the training prompt includes part of the phrasebooks, whereas the test prompt does not. What accounts for this OOD generalization? Standard empirical risk minimization cannot explain this behavior. Is the generalization driven by the base model's capabilities (Llama 3.2-3B), or by its MLT(d, n)-ICL capability? In other words, if one were to train an MLT(d, n)-ICL-capable model from scratch instead of fine-tuning a pretrained Llama model, would it still be efficiently fine-tuned to become MLT_{\Pi_*}-capable?
2. I did not fully understand how the “SQ dimension of MLT(d, n)” implies that “any algorithm attempting to learn an MLT_{\Pi_*}-capable SURR-MLT requires at least n^d sample complexity.” Could the authors provide further clarification on this reasoning?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you so much for your comments and insightful questions!
Our responses to your questions are as follows:
---
### What accounts for the OOD generalization to empty context?
**Short Answer**: We believe this OOD generalization is fundamentally a form of compositional generalization, achieved by the composition-friendly sequential translation structure in the MLT(d,n)-ICL-capable model. Following your suggestion, we tried training an MLT(d,n)-ICL-capable model from scratch, which proved difficult. The current OOD capability is likely a joint result of the base model and the ICL-capability training. We expect that as long as the MLT(d,n)-ICL-capable initialization has a similar composition-friendly mechanistic nature, we would see similar OOD behavior.
**More details:**
**Mechanism of the OOD Generalization**
We believe two mechanistic properties of the reported model played an important role in this generalization phenomenon (inferred from section 3.3, with additional evidence in response to reviewer vWg8):
1. The sequential structure of translation in the model, in which each step can take information from context or the weights.
2. Phrasebook information at different translation stages is internalized in a localized manner, compensating for the corresponding dropped phrasebook information in the context.
During training, when certain phrasebook entries are dropped from the context while others remain, the dropped information is learned into the weights of certain transformer layers. When we perform sufficiently many random dropouts on the context, every phrasebook entry will be dropped, and hence internalized, at some point, despite the context never being empty. Therefore the model is robust to complete removal of the context, since every piece of phrasebook information it needs has been internalized.
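This "every rule is eventually dropped" argument is a coupon-collector style fact. Under the simplifying assumption of independent per-rule dropout with probability $p$ (an illustrative model, not our exact dropout scheme), the probability that all $R$ rules have been dropped at least once within $T$ training samples is $(1 - (1-p)^T)^R$:

```python
# Probability that every one of R rules is dropped at least once across T
# independent training samples, with per-rule dropout probability p.
# (Simplified independence assumption, for illustration only.)

def prob_all_rules_dropped(num_rules, p_drop, num_samples):
    p_never_dropped = (1 - p_drop) ** num_samples
    return (1 - p_never_dropped) ** num_rules

R = 8 ** 2 * 5  # n^2 = 64 rules per phrasebook, d = 5 phrasebooks
few = prob_all_rules_dropped(R, 0.2, 10)    # nearly 0: too few samples
many = prob_all_rules_dropped(R, 0.2, 200)  # nearly 1: full coverage
```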
Our theoretical model (Surr-MLT, see Def 5.2) is built upon this rationalization. It can achieve full accuracy when conditioning on nothing at test time, despite only one rule being dropped from each step during training.
>“If one were to train an MLT(d, n)-ICL-capable model from scratch instead of fine-tuning a pretrained Llama model, would it still be efficiently fine-tuned to become $MLT_{\Pi^*}$-capable”
**Response**: We tried training the same 3B model to be MLT(5, 8)-ICL-capable from scratch with various configurations. However, even with a million random sets of phrasebooks, the model is still unable to learn any pattern. Thus, unfortunately, we cannot empirically answer this question. However, if a model trained from scratch also shares a similar mechanistic structure as discussed above, then similar sample efficiency may appear.
>”Is the generalization driven by the base model’s capabilities (Llama 3.2-3B), or by its MLT(d, n)-ICL capability?”
**Response**: As discussed above, it is hard to separate the effect from the base model and the MLT(d, n)-ICL capability since the base model affects how the specialized MLT(d, n)-ICL capability (i.e. the sequential translation structure) is formed in the model parameter. Whether the base model is strictly necessary remains unclear.
---
### SQ dimension and Sample Complexity of SGD
**Short Answer**: We stated in the main text that “any algorithm attempting to learn an $MLT_{\Pi^*}$-capable SURR-MLT requires at least $n^d$ sample complexity.” This was an informal statement. A more precise version is: any algorithm that attempts to learn the task by querying distributional properties of $MLT_{\Pi^*}$ and receiving noisy estimates in return—i.e., algorithms operating within the statistical query (SQ) framework—requires at least $n^d$ sample complexity.
**More details**: The formal SQ dimension (as defined in Section F.1 of the appendix) characterizes the computational difficulty for algorithms in the SQ model. In this framework, algorithms interact with an oracle by querying expectations over properties of the target task on data distribution and receive approximate (noisy) answers. A prototypical example is stochastic gradient descent (SGD), where the model estimates gradients based on average loss over a batch. The estimation noise depends on the batch size, along with potential adversarial or quantization noise. For problems with high SQ dimension (e.g., exponential in input size), it can be shown that the number of samples or gradient steps required by such algorithms becomes exponential.
However, these lower bounds do not extend to algorithms that lie outside the SQ framework. For instance, if an algorithm has access to auxiliary information that directly hints at the target function (like phrasebooks in the MLT task), it can bypass the SQ limitations by exploiting this extra structure. In such cases, the learning process can be significantly more efficient due to this inductive bias.
---
### Presentation
Thank you for your advice on the presentation! We will add illustrative figures in future versions of the manuscript. | Summary: The authors study the impact of augmenting context with additional data for learning in LLMs. Notably, no autoregressive gradients are computed on the added data. The authors consider a stylized problem of multi-layer text translation over a finite alphabet, where text from one language is translated to another in $d$ steps, each step using a phrasebook and being invertible. The paper contrasts the learning difficulty when all the phrasebooks are provided in the context with the case where such context enhancement is absent. Notably, these phrasebooks are not provided at test time. Therefore, explicit CoT is avoided and the model operates in a silent/internalized CoT mode, adding its own CoT before producing the results.
The authors tune the Llama 3.2 3B model to expose the LLM to the specific translation task using SFT with random phrasebooks. Next, they train the model using the context-enhanced learning technique with dropout and annealing to force the model not to memorize. The experiments show that without dropout or annealing the test-time performance suffers. This is augmented with interesting mechanistic observations of the learned model.
The authors next establish an exponential sample complexity improvement with context-enhanced learning using a stylized learning model. They show that without context enhancement the learning task has an $n^{\Omega(d)}$ sample complexity with $d$-layer translation over an $n$-sized alphabet (under the statistical query model). However, with context enhancement a heuristic search algorithm exists with $O(d n^6)$ sample complexity, and for $d=2$ a gradient-based algorithm exists with sample complexity $O(n^4)$.
Claims And Evidence: The authors claim that for the multi-layer translation task, context-enhanced learning boosts learning in LLMs, even when the context enhancement is not present at test time. They support their claims both experimentally and on a surrogate model for learning.
Methods And Evaluation Criteria: This paper studies the utility of context enhancement in learning for LLM. The authors study this on a specific task, namely multi-layer translation. The methods seem logically sound. The mechanistic interpretation of the learning dynamics provide further evidence that context enhancement helps with multi-layer translation.
Theoretical Claims: The authors claim that context enhancement provides exponential improvement in the learning of multi-layer translation in a surrogate learning model. They also show that the surrogate model can be a simulated using transformers.
Experimental Designs Or Analyses: The designed experiment is non-standard but seems suited for understanding the effectiveness of context enhancement for multi-layer translation. The loss evolution with various different annealing/dropout method shows the effectiveness of the context enhancement.
The mechanistic explanations also provide some structural insights into the learned models.
Supplementary Material: I have not looked into the proofs in detail, but the proof logical flows seem correct.
Relation To Broader Scientific Literature: This paper provides new insights into the utility of contexts in LLM learning, albeit in a very stylized setup.
Essential References Not Discussed: I have basic, but not in-depth, knowledge of the literature, and I did not find any important references lacking. But I may have missed some references given I lack in-depth knowledge of the literature.
Other Strengths And Weaknesses: Strengths
- The results, theoretical, mechanistic, and experimental, presented in the paper seems insightful in showcasing the effectiveness of context enhancement.
- The exponential improvement due to context enhancement in the surrogate learning model seems compelling.
Weakness/Questions
- The multi-layer translation task presented does not shed light on the generalization capability of the technique. It seems the model learns the rules of translation through context-enhanced learning. But due to the assumed bijection of the translation rules (phrasebooks), the setup only shows quicker memorization (not generalization capabilities). Does that capture the scope of the work?
- The authors consider the setting where the context is not enhanced at test time. What is the motivation behind this?
- In the surrogate model the phrasebooks are combined with the model weights; a more natural representation would be augmentation of these in the input space. This choice is somewhat unsatisfactory. The transformer model that mimics this surrogate learning also does not include the phrasebooks in the input space, so the claim that a standard transformer model can replicate the learning seems misleading.
Other Comments Or Suggestions: See Weakness part.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you so much for your comments and insightful questions!
Please find our responses to your questions / concerns below.
---
### Generalization in Context-Enhanced Learning
>"The MLT task does not shed light into the generalization capability of the technique ... the setup only shows quicker memorization (not generalization capabilities)."
**Response:** Our main message goes beyond quicker memorization of the phrasebooks. We also highlight that:
1. Even when **no next-token prediction loss** is computed on the phrasebook tokens, their mere presence in context can still accelerate learning.
2. The model does not need to verbatim memorize the phrasebooks (as shown in Section 4). This is beyond the usual notion of memorization in language model training.
Within MLT there is also OOD generalization behavior: the model can perform the task with empty context at inference even though only 20% of the rules are dropped during training. This suggests that it has learned to generalize by composing rules at inference time in an OOD manner (see lines 248–249).
Please also check our response to reviewer **evJV** on understanding this OOD phenomenon and response to reviewer **PGoK** demonstrating effectiveness of context-enhanced learning on real-world data.
---
### What is the Motivation of Having no Context at Test Time
**Response:** Our motivation is twofold: a philosophical one, and a theoretical explanation of existing empirical works leveraging context enhancement.
First, we aim to examine whether models can internalize and benefit from additional information presented during training, even when that information is unavailable at test time. This setting mirrors the human paradigm of open-book learning followed by closed-book testing, and our framework seeks to assess whether models can exhibit similar learning behaviors.
Second, we aim to offer a theoretical perspective on the empirical success of context-enhancement in LLM training. Prior works (e.g. Allen-Zhu et al. (2024) and Gao et al. (2025)) demonstrate that pre-training with auxiliary information improves both training efficiency and final performance — even when such information is absent during inference. Moreover PromptIntern (Zou et al. 2024) and SKIntern (Liao et al. 2024) suggest that context-enhancement can enable more efficient inference by reducing the prompt length. Our work provides a simple theoretical model to help explain these empirical findings and unveil the theoretical potential on sample efficiency.
---
### How Phrasebook Information are Provided to Surrogate Model
>"In the surrogate model the phrasebooks are combined with the model weights, a more natural representation would be augmentation of these at the input space."
**Response:** The surrogate model is motivated by our mechanistic experiments in section 3.
First, the ICL-capable model utilizes the in-context phrasebooks in a sequential manner: phrasebooks of later steps are involved in the translation process in later layers (fig 3). During context-enhanced learning, the phrasebook information of a certain translation step is locally internalized to a small set of layers coupled to the layer processing the in-context phrasebooks for that step (fig 4). Thus we parameterize each translation step as a coupling of an in-context representation $C_i$ and an in-weight representation $W_i$, which is the minimal model capturing the mechanistic behavior of the model before and after the training process.
>"The transformer model that mimics this surrogate learning also does not include the phrasebooks at the input place, so the claim that standard transformer model can replicate the learning seems misleading."
**Response:** We would like to highlight that **the transformer model we constructed in Theorem 5.3 takes all context information from the input sequence** (see segment 1 of the input in Algorithms 6 and 7, pp. 70-71). Our construction of the transformer is based on an exact reparameterization of the surrogate model.

Let $e_i \in \mathbb{R}^d$ be the $i$-th basis vector and let $0_{m \times n}$ be an $m \times n$ all-zero matrix. Let the context-augmented input be $X_1 = [C_1, \dots, C_d, V_1] \in \mathbb{R}^{n^2 \times (dn^2 + L)}$; then the reparameterized surrogate model is

$$
\begin{aligned}
X_{i+1} &= X_i \begin{bmatrix} I_{dn^2} & 0 \\ 0 & 0_{L \times L} \end{bmatrix} + \left( \left( X_i \begin{bmatrix} e_i \otimes I_{n^2} \\ 0_{L \times n^2} \end{bmatrix} + W_i \right) \texttt{Shift}\left( X_i \begin{bmatrix} 0_{n^2 d \times L} \\ I_L \end{bmatrix} \right) \right) \begin{bmatrix} 0_{L \times dn^2} & I_L \end{bmatrix} \\
&= [C_1, \dots, C_d, 0_{n^2 \times L}] + [0, \dots, 0, \left( C_i + W_i \right) \texttt{Shift}\left( V_i \right)] \\
&= [C_1, \dots, C_d, V_{i+1}].
\end{aligned}
$$

This model outputs $[C_1, \dots, C_d, V_{d+1}]$ with input $[C_1, \dots, C_d, V_1]$, where all phrasebooks are provided in-context. Please refer to Algorithms 6 and 7 for how we modeled the computation above using self-attention, MLPs, and residual connections in a standard transformer.
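The block structure of this update can be checked numerically. The sketch below is under our own simplifications: the $\texttt{Shift}$ operator is stood in by the identity (it does not affect the block structure being verified), and the selector-matrix products are written directly as column slices:

```python
import numpy as np

# Numerical check of the reparameterized update: the context blocks C_1..C_d
# pass through every layer untouched, while only the V block is updated to
# (C_i + W_i) @ Shift(V_i).
rng = np.random.default_rng(0)
n2, d, L = 4, 2, 3                       # n^2 = 4, d phrasebooks, length-L sequence
C = [rng.normal(size=(n2, n2)) for _ in range(d)]
W = [rng.normal(size=(n2, n2)) for _ in range(d)]
V = rng.normal(size=(n2, L))
shift = lambda v: v                      # placeholder for the Shift operator

X = np.hstack(C + [V])                   # X_1 = [C_1, ..., C_d, V_1]
for i in range(d):
    C_i = X[:, i * n2:(i + 1) * n2]      # i-th in-context phrasebook block
    V_i = X[:, d * n2:]                  # current intermediate sequence
    X = np.hstack([X[:, :d * n2], (C_i + W[i]) @ shift(V_i)])

assert np.allclose(X[:, :d * n2], np.hstack(C))   # contexts unchanged
expected_V = (C[1] + W[1]) @ ((C[0] + W[0]) @ V)  # composed translation
assert np.allclose(X[:, d * n2:], expected_V)
```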
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifying response.
The clarifications on OOD generalization capabilities are welcome. My concern was somewhat different: it was about generalizing to new, unseen codebooks. Admittedly this is somewhat different from the standard notion of distributional generalization in ML, and my phrasing could have been better.
The explanation of Theorem 5.3 clears my doubts. The connection between surrogate models and mechanistic insights is interesting. A more convincing argument relating the recovered mechanistic structure to the $C_i$ structure used in Def G.4 is required.
I will maintain my positive score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your quick response. We will address your additional concerns as follows.
---
### OOD Generalization to Unseen Phrasebooks
> My concern was somewhat different, it was about generalizing to new unseen codebooks
**Response:** Our evaluation measures the performance of the model when it is not given information on the phrasebooks in-context. Because of our setup, an $MLT_{\Pi*}$-capable model will trivially fail on tasks involving unseen phrasebooks. On the other hand, our experiments with 20% dropout show a non-trivial generalization ability with CEL: rules in phrasebooks need not be dropped together, yet the model can compose all the phrasebook rules during evaluation.
There may be other evaluation strategies that can shed light on additional forms of out-of-distribution generalization, such as composition with entirely unseen phrasebooks, but we leave these directions for future work.
---
### Mechanistic Bases of Parameterizing $C_i$’s
> More convincing argument that relates the mechanistic structure recovered to the
$C_i$ structure used in Def G.4 is required.
**Response:** Thank you for raising this point! In our theoretical analysis, we only use the fact that the $C_i$'s are operators mapping 2-tuples in the current intermediate sequences to the following intermediate sequences in the representation space. In the Surr-MLT model we parameterized the sequence representations in one-hot encoding, so the operators are parameterized as permutation matrices as in Def G.4. While we do not expect the exact same parameterization to exist in the Llama models, **the identical mechanistic process (i.e. in-context representations of phrasebooks guiding the translation step for every 2-tuple) can be uncovered in the Llama models by analyzing the attention pattern from output token positions to in-context phrasebook token positions.**
In particular, we consider the same MLT-ICL-capable model as in Figure 3 and visualize the relevant attention patterns [in this new figure](https://ibb.co/pBFQ4jfv). For each translation step, we take the layers accountable for that step as discovered in Section 3.3 and consider the attention pattern from the relevant output token positions of that step (segments of <THINK> abstract tokens or the final output sequence) to the relevant phrasebook token positions of that step in context. The visualization we present for each step is averaged across the attention heads in the selected layers.
For all intermediate steps, we can see that the specific layers attend sparsely to the in-context token positions. Moreover, each 2-token segment of the output representation attends to the correct relevant phrasebook entry, effectively guiding the mapping from the representation of the current intermediate sequence to the next intermediate sequence. Please see the annotated attention pattern (what phrasebook entries are involved) and its exact correspondence to the next intermediate sequence as provided in the subfigure titles.
We will add this discussion to the future versions of the paper to better support the parameterization of the surrogate model. | null | null | null | null | null | null |
Density Ratio Estimation with Conditional Probability Paths | Accept (poster) | Summary: This paper introduces a new method for density ratio estimation called conditional time score matching (CTSM). CTSM estimates the time score along a probability path connecting two densities instead of directly estimating the ratio of two densities. By conditioning on additional variables, the authors propose an easy-to-estimate objective and a faster variant. They further provide the theoretical guarantee of error bounds and then show strong empirical results, especially in high-dimensional settings.
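For intuition, the time score being estimated has a closed form for a toy path (a 1-D Gaussian interpolant of my own choosing, not the paper's construction); integrating it over $t \in [0, 1]$ recovers the log density ratio $\log p_1(x) - \log p_0(x)$:

```python
import math

# Toy path p_t = N(x; (1 - t) * mu0 + t * mu1, sigma^2) with constant sigma.
mu0, mu1, sigma = -1.0, 2.0, 0.7

def log_p(t, x):
    mu = (1 - t) * mu0 + t * mu1
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def time_score(t, x):
    """Analytic d/dt log p_t(x) for this path."""
    mu = (1 - t) * mu0 + t * mu1
    return (x - mu) * (mu1 - mu0) / sigma ** 2

# Check the analytic time score against a central finite difference.
t, x, h = 0.3, 0.4, 1e-5
finite_diff = (log_p(t + h, x) - log_p(t - h, x)) / (2 * h)
assert abs(finite_diff - time_score(t, x)) < 1e-6

# Integrating the time score over t recovers the log density ratio. Here the
# time score is linear in t, so a two-point trapezoid rule is exact.
log_ratio = 0.5 * (time_score(0.0, x) + time_score(1.0, x))
assert abs(log_ratio - (log_p(1.0, x) - log_p(0.0, x))) < 1e-9
```

Estimating this $\partial_t \log p_t$ with a network and integrating it over the path is, as I read it, the density-ratio mechanism that CTSM builds on.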
## update after rebuttal:
I'm maintaining my score.
Claims And Evidence: Yes, the claims are supported by clear theoretical analysis and empirical results.
Methods And Evaluation Criteria: The method and chosen benchmarks clearly evaluate the improvements over existing methods. The evaluation criteria follow the prior work of Rhodes 2020 and Choi 2022, and they make sense for the problem.
Theoretical Claims: I review the key theoretical results and proofs (mainly section 5) in the paper. The proofs are correct and rigorous.
Experimental Designs Or Analyses: The experimental design in this paper is sound and consistent with prior work. However, in Table 1, the evaluation is limited to only a pretrained Gaussian normalizing flow. The paper could benefit from additional experiments from prior work. Specifically, it could include experiments using Copula interpolation and RQ-NSF interpolation, following Table 1 of [Choi 2022].
Choi 2022. Density Ratio Estimation via Infinitesimal Classification.
Supplementary Material: Yes, I reviewed appendix B,C,D, F. The supplementary material is comprehensive, well-organized, and provides enough details.
Relation To Broader Scientific Literature: This paper contributes to the broader research on density ratio estimation, which has numerous application in machine learning.
Essential References Not Discussed: The current set of references is robust.
Other Strengths And Weaknesses: See "Experimental Designs Or Analyses" for additional experiments.
Other Comments Or Suggestions: Not applicable.
Questions For Authors: Not applicable.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the very positive assessment. Below we respond to the comment on the MNIST experiment.
We indeed showed results only for the pre-trained Gaussian NF, since it was the most computationally stable. We also have some experimental results for Copula and RQ-NSF, utilizing essentially the same score network as in [1], but found the optimization to be unstable.
Moreover, after the submission we also experimented more with directly solving the same problem in the ambient space itself using a different score network, instead of using any pre-trained flow and its latent space. Whereas previous methods reported that using a pre-trained flow was important to obtain good results (see [1, 2]), our method works extremely well in the ambient space scenario, reaching a BPD of 1.03, outperforming any of the previous results utilizing a latent space. We also checked that the learnt time score in ambient space was capable of generating reasonable samples using two sampling methods: annealed MCMC and the Probability Flow ODE from [3]. We will also add this result.
[1] Choi et al., Density Ratio Estimation via Infinitesimal Classification, AISTATS 2022
[2] Rhodes et al., Telescoping Density-Ratio Estimation, NeurIPS 2020
[3] Song et al., Score-Based Generative Modeling through Stochastic Differential Equations, ICLR 2021 | Summary: The authors tackle the problem of estimating the ratio of two probability densities, improving upon the speed and accuracy of prior work. They also establish a theorem guaranteeing a bound on the error of the estimated density ratio. The method applies the "marginalization trick" (a la flow matching) to make the learning objective tractable. Essentially, a latent variable is introduced into the stochastic interpolant between the test and target densities, and the objective is modified to match the expectation of score matching, marginalizing out the latent. The distributions are chosen in a clever way, similar to previous literature, so that the objective is tractable (the so-called Conditional Time Score Matching objective). The authors also introduce a variant where multivariate test and target distributions are broken up into $n$ terms in an autoregressive fashion and the terms matched separately. A novel contribution is the design of a weighting function, time score normalization, to stabilize training. This could become a standard statistical method.
Claims And Evidence: Claims in the paper are supported by ample ablation studies and other experiments (with uncertainty estimates).
Methods And Evaluation Criteria: Proposed evaluation makes sense for the problem at hand and are quite comprehensive.
Theoretical Claims: I didn't review the proof for Theorem 4 and Proposition 5, which are in the supplementary materials.
Experimental Designs Or Analyses: Experimental design is sound.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Estimation of density ratios is a fundamental problem in generative modeling with many applications, as the paper describes.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: I think the paper would benefit from some discussion on how this work isn't just a straightforward application of the ideas behind flow matching. Is this something you would consider?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive assessment. We agree that our learning objective (CTSM) is similar to the learning objective in flow matching (Eq 5, [1]). However, our method is not a straightforward application of flow matching given that, fundamentally, we learn a different quantity. Flow matching learns a velocity field while we learn a time score.
Furthermore, our vectorized learning objective (CTSM-v) is a novel contribution of our work that is essential for high-dimensional experiments (see Fig 2). In fact, our CTSM-v objective has no obvious counterpart in the flow matching literature, to the best of our knowledge. Moreover, our time score matching weighting scheme is tailored for learning the time score and is different from weighting schemes used in flow matching (Eq 5, [1], i.e. a uniform weighting).
We are happy to include these clarifications in the main text.
[1] Lipman et al., Flow Matching for Generative Modeling, ICLR 2023 | Summary: This paper proposes the conditional time score matching (CTSM) objective, which is a variant of the time score matching (TSM) objective proposed in the recent density ratio estimation literature.
The CTSM objective provides a principled way to sidestep the computational drawback of TSM, which requires two automatic differentiation steps. The idea is to introduce a conditioning variable z, and the mathematical trick is essentially the same as that of denoising score matching. The authors provide a vectorized variant that can be useful in practice.
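For readers less familiar with the trick: the conditioning step mentioned above mirrors denoising score matching. Since $p_t(x) = \int p_t(x\mid z)\,p(z)\,dz$ with $p(z)$ independent of $t$, the time score satisfies

$$\partial_t \log p_t(x) \;=\; \mathbb{E}_{p_t(z\mid x)}\!\left[\partial_t \log p_t(x\mid z)\right],$$

so regressing onto the tractable conditional target $\partial_t \log p_t(x\mid z)$ under a squared loss changes the objective only by a constant independent of the model (a sketch of the standard argument, not the paper's exact derivation).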
They also analyze the quality of the estimated density $\hat{p}_1(x)$, where the integration of the time score is approximated by a discretization, assuming that $p_0(x)$ is given.
The experiments support that CTSM can perform as well as TSM with less computation.
Overall, it is a well-written paper with well-executed analyses and experiments.
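The density recovery step summarized above — integrating the time score over $t$ and discretizing — can be illustrated with an analytic toy example (my own construction with a closed-form time score standing in for a learned network, not the paper's code):

```python
import numpy as np

# Toy Gaussian path p_t = N(0, v_t), v_t = (1 - t) v0 + t v1, whose time
# score d/dt log p_t(x) is known in closed form.
v0, v1 = 1.0, 4.0

def time_score(x, t):
    v = (1 - t) * v0 + t * v1
    return (v1 - v0) * (x**2 / (2 * v**2) - 0.5 / v)

def log_gauss(x, v):
    return -0.5 * np.log(2 * np.pi * v) - x**2 / (2 * v)

# Recover the log density ratio by discretizing the time integral:
# log p_1(x) - log p_0(x) = ∫_0^1 d/dt log p_t(x) dt  (trapezoid rule).
x = 1.3
t = np.linspace(0.0, 1.0, 513)
f = time_score(x, t)
est = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

exact = float(log_gauss(x, v1) - log_gauss(x, v0))
```

With a smooth variance path, the trapezoid discretization matches the exact log ratio to high accuracy, which is the mechanism behind estimating $\hat{p}_1(x)$ from $p_0(x)$ plus the integrated time score.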
## update after rebuttal
I thank the authors' response. I will keep my positive score.
For the final version, it'd be helpful if the authors further comment on the choice of KL divergence for the analysis and its limitation.
Claims And Evidence: The theoretical claims are solid. Experimental validations are also thorough showing the benefit of CTSM compared to TSM.
Methods And Evaluation Criteria: The proposed method is a very natural yet effective modification of TSM leading to computational efficiency.
The evaluation criteria in experiments seems also adequate.
Theoretical Claims: I checked the proofs of the statements, but only skimming over the proof of Proposition 5.
Experimental Designs Or Analyses: The experiments are solid enough to demonstrate the benefit.
Supplementary Material: I checked Appendices B, D, and a part of E.
Relation To Broader Scientific Literature: Density ratio estimation in general is a key task in machine learning.
In recent years, density ratio estimation based on infinitesimal classification has received attention as a solution to the so-called "density chasm" problem. The paper's idea is not rocket science, but it provides a very thorough analysis of the benefit of conditioning, both in theory and practice.
Essential References Not Discussed: References seem adequate.
Other Strengths And Weaknesses: It was a pleasant read overall and I do not have any specific comment on weakness.
Other Comments Or Suggestions: N/A
Questions For Authors: - In theoretical guarantees, is there a particular reason for the KL divergence considered in the analysis? I am just curious if this is a convenient choice for the ease of analysis, or something else. Can this relationship be translated to, for example, the expected value of the square of the difference in the log density ratios?
- Is the last term a typo in Eq. (45)?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for the very positive assessment and for spotting the typo in Eq. 45 which is now corrected. Regarding our theoretical guarantees, we chose the KL divergence out of convenience for the derivations in Appendix D.3. A similar analysis could indeed be used to bound the expected value of the square of the difference in the log density ratios. To do so, we would start from Eq. 81 and replace the log densities with log density ratios. | null | null | null | null | null | null | null | null |
AuPair: Golden Example Pairs for Code Repair | Accept (poster) | Summary: The paper introduces AuPair, a custom method that generates golden example pairs for enhancing code repair performance. In their work, each pair contains an initial guess and its fix, which are used as in-context examples at inference time to generate a repaired solution. At inference time, the fix with the best score is selected as the output. The authors claim that this approach not only greatly outperforms methods like best-of-N and self-repair on 5 LLMs and 7 datasets, but also scales better with inference-time compute.
Claims And Evidence: Claim: AuPair significantly improves code repair performance on different models and datasets, as well as scaling.
Evidence: Experiments on the 7 datasets and the 5 LLMs, where AuPair shows better performance on different comparison metrics.
Methods And Evaluation Criteria: The method is about generating guess-fix pairs and selecting a diverse subset using their algorithm. The evaluation seems good as it covers repair quality as well as diversity, so it makes sense to me.
Theoretical Claims: There seems no theoretical proof.
Experimental Designs Or Analyses: The design of the experiments seems reasonable to me, as it contains a good validation frame and spans different LLMs. However, a more detailed discussion of parameter selection would further benefit the paper.
Supplementary Material: The supplement contains details about the experiments, with formulas, pseudocode, and prompts for pair generation. It also has additional experimental reports for evaluation across different benchmarks.
Relation To Broader Scientific Literature: The paper's AuPair builds on best-of-N, self-repair, and in-context learning. The main contribution seems to be a prompt-engineering method to improve the LLM's performance.
Essential References Not Discussed: No.
However, regarding code generation, it might be good if the authors could discuss related work that uses adversarial approaches for reference.
Other Strengths And Weaknesses: Strong empirical results based on the datasets and LLMs the author presented.
Weakness:
Only unit test scores as the metric to evaluate correctness of code repairs.
Does not have a good analysis of failure cases or situations where the method does not work well
Limited theoretical part
Other Comments Or Suggestions: Consider including more comprehensive ablation studies so the impacts are shown in more detail.
Consider discussing situations to avoid that may cause failures or lower performance.
Main concern: You didn't mention about the code availability, please ensure to make them available for reproducibility.
Questions For Authors: Have you tested the sensitivity of the input? Is the unit test score the only metric you used? Have you considered a better assessment of code quality?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. discuss related work that uses adversarial approaches for reference.
We will include references to adversarial approaches such as [1,2] in the Related Work section.
2. Only unit test scores as the metric to evaluate correctness of code repairs
In this work, we have used unit test scores for assessment of fix quality and for guiding the exploration phase, such as has been done in prior work [3,4,5]. However, there is nothing fundamentally in the approach that prevents the use of other feedback mechanisms, such as reward models, or other sources of feedback to boost its performance. We leave this set of experiments for future work since we believe it is orthogonal to the key ideas presented.
3. Does not have a good analysis of failure cases or situations where the method does not work well
We refer the reviewer to point 1 in our rebuttal address to reviewer oGUo: we provide concrete failure modes in which using AuPairs can lead to fixes that are potentially worse than the initial guesses, and this is observed more frequently for AuPair than other baselines on average, since AuPair is a diversity-boosting algorithm.
4. Is the unit score the only metrics you used?
No, in addition to the test pass rate for unit tests, we also report the strict accuracy metric [6], which is the percentage of test problems that are fully solved. We report this metric for every single ablation in the paper; we refer the reviewer to Section A.2 in the Appendix, as well as Figs. 10, 11, 12 for plots showing this metric.
5. Consider discussing situations to avoid that may cause failures or lower performance.
We refer the reviewer to Section A.4 of the Appendix, where we have a detailed discussion on the impact of smaller datasets on the efficacy of our proposed approach. Our results indicate that even though AuPair works well in the small dataset regime, curating the AuPairs on a larger dataset and then applying them to a smaller dataset, even though out-of-distribution, recovers in-distribution performance; it can also potentially lead to stronger scaling, since there will be fewer AuPairs extracted on smaller datasets. Furthermore, we have additional results for this in point 4 of our rebuttal to reviewer oGUo.
---
[1] Adversarial patch generation for automated program repair, Alhefdhi et al. 2023
[2] Learning to repair software vulnerabilities with Generative Adversarial Networks, Harer et al. 2018
[3] Code Repair with LLMs gives an Exploration-Exploitation Tradeoff, Tang et al. 2024
[4] Cycle: Learning to self-refine the code generation, Ding et al. 2024
[5] Teaching LLMs to self-debug, Chen et al. 2023
[6] Measuring coding challenge competence with apps, Hendrycks et al. 2021 | Summary: The paper introduces AuPair, a novel algorithm designed to improve Large Language Models' (LLMs) performance on code repair tasks through inference-time computation. AuPair leverages in-context learning by synthesizing an ordered set of example pairs (called "AuPairs") consisting of initially incorrect code ("guesses") and subsequent corrected code ("fixes"). At inference time, the method provides each AuPair as a one-shot in-context example to guide LLMs toward generating diverse and improved code fixes. The inference-time algorithm to construct highly effective "golden example pairs" for code repair can achieve significant performance boosts over traditional approaches like best-of-N and self-repair.
Claims And Evidence: - The authors claim their proposed approach (AuPair) significantly improves the code-repair capability of Large Language Models (LLMs) by leveraging carefully selected golden example pairs for in-context prompting.
- AuPair demonstrates strong generalization capability
- AuPair significantly outperforms traditional inference-time methods like Best-of-N and Self-repair methods.
- AuPair scales notably better with increasing inference-time compute budgets, yielding substantially higher performance improvements per unit of compute than baselines.
- AuPairs result in more diverse code fixes compared to best-of-N approaches
The claims are well supported.
Methods And Evaluation Criteria: AuPair consists of two phases:
Phase 1: Pair Generation
- Data Collection: Starting with a dataset of coding problems and initial LLM-generated guesses (potentially flawed code), the LLM iteratively generates fixes.
- Candidate Pair Generation: For each sampled problem and its initial flawed guess, the LLM produces improved fixes using a k-shot prompt (with randomly selected pairs from existing candidate set). Generated fixes that outperform the original guess are added to a pool of candidate guess-fix pairs. Imperfect fixes become new guesses to generate additional candidate pairs.
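The Phase-1 loop described above can be sketched as follows; `propose_fix` and `score` are hypothetical stand-ins for the LLM call and the unit-test harness (this is a reconstruction from the review's description, not the authors' code):

```python
import random

def generate_candidate_pairs(problems, propose_fix, score, rounds=100, seed=0):
    """Phase-1-style loop: grow a pool of (problem, guess, fix) triples.
    `propose_fix(problem, guess, examples)` stands in for the LLM call;
    `score(problem, code)` stands in for running the unit tests."""
    rng = random.Random(seed)
    pool = []
    guesses = {p: [g] for p, g in problems}   # active guesses per problem
    for _ in range(rounds):
        p = rng.choice(list(guesses))
        g = rng.choice(guesses[p])
        examples = rng.sample(pool, min(len(pool), 2))  # k-shot prompt
        f = propose_fix(p, g, examples)
        if score(p, f) > score(p, g):          # fix outperforms the guess
            pool.append((p, g, f))
            if score(p, f) < 1.0:              # imperfect fix -> new guess
                guesses[p].append(f)
    return pool

# Hypothetical toy instantiation: "code" is an int, each fix increments it,
# and the score is the fraction of 4 unit tests passed.
problems = [("p1", 0), ("p2", 0)]
score = lambda p, x: min(x, 4) / 4
propose = lambda p, g, ex: g + 1
pool = generate_candidate_pairs(problems, propose, score, rounds=60)
```

By construction, every pooled pair has a fix that strictly outperforms its guess, and imperfect fixes are recycled as new guesses, which is what grows the candidate pool.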
Phase 2: AuPair Extraction
- For each candidate pair and problem in the validation set, the algorithm constructs a 1-shot prompt to evaluate how effectively each pair guides the LLM to repair other problems.
- The LLM-generated fixes are scored against provided unit tests, forming a fix-quality matrix.
- AuPairs are selected in a greedy manner from the candidate pairs based on their incremental contribution to solving distinct problems, ensuring complementarity and diversity.
- This selection iteratively picks pairs that yield maximum additional performance on unsolved problems until further improvements fall below a predefined tolerance threshold.
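The greedy extraction above can be sketched as a facility-location-style selection over the fix-quality matrix. This is my own reconstruction from the description (row = candidate pair, column = validation problem), not the authors' implementation:

```python
import numpy as np

def greedy_aupair_selection(M, tol=1e-3):
    """Greedily pick rows of the fix-quality matrix M (candidate pairs x
    validation problems) whose incremental coverage of not-yet-solved
    problems is largest, stopping once the gain drops below `tol`."""
    covered = np.zeros(M.shape[1])   # best score achieved so far per problem
    selected = []
    while True:
        gains = np.maximum(M - covered, 0.0).mean(axis=1)
        best = int(np.argmax(gains))
        if gains[best] < tol:
            break
        selected.append(best)
        covered = np.maximum(covered, M[best])
    return selected

# Toy matrix: pair 3 solves two problems well, pair 2 adds the third.
M = np.array([[0.90, 0.00, 0.00],
              [0.00, 0.90, 0.00],
              [0.50, 0.50, 0.50],
              [0.85, 0.85, 0.00]])
order = greedy_aupair_selection(M, tol=0.05)
```

Subtracting the already-covered score before each pick is what makes successive selections complementary rather than redundant: after pair 3 is taken, pair 2 wins because it is the only one helping the third problem.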
The paper evaluates the proposed method using seven competitive programming datasets, using multiple models.
These methods and metrics allow for a rigorous evaluation of the algorithm's effectiveness.
Theoretical Claims: The paper does not make explicit theoretical claims or provide formal theoretical analysis
Experimental Designs Or Analyses: The experimental designs and analyses are sound.
Supplementary Material: I have reviewed the supplementary material but did not check all the detail.
Relation To Broader Scientific Literature: The paper builds upon the broader literature which has shown that increasing inference-time computation, without additional fine-tuning, significantly enhances Large Language Model (LLM) performance (e.g., best-of-N, self-consistency, etc). The paper effectively integrates ideas from multiple recent lines of research—leveraging in-context learning capabilities, automated code repair without supervised fine-tuning, and strategic prompting—to deliver significant performance gains, and positions itself well within and advances the state of the literature in automated self-repair for code generation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- The paper introduces a practical and impactful approach
- Extensive experiments across multiple competitive programming datasets (CodeForces, AtCoder, CodeChef, etc.)
- Empirical results convincingly show substantial improvements
- Detailed ablation and additional analysis
Weaknesses:
- Limited exploration of failure cases
- The evaluations focus predominantly on competitive programming tasks. It remains unclear whether the impressive results would translate directly into other code repair domains, such as debugging production software (or non-code related domain), where test coverage or complexity might differ significantly.
- The paper's method strongly relies on predefined test cases and evaluations on these tests to judge fix quality.
Other Comments Or Suggestions: N/A
Questions For Authors: How sensitive is AuPair's effectiveness to the size and diversity of the validation set used for AuPair selection?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: 1. Limited exploration of failure cases
While AuPairs have been shown to significantly boost performance, they can occasionally have unintended impacts as well. The following table contains the percentage of CodeForces problems in which some fixes had a decrease in fix score compared to the initial guess. Note that this does not affect any of the performance results in the main paper, since for measuring performance, the best scoring response is selected.
**Model** | **Approach** | **% problems w/ decreased fix score**
--- | --- | ---
Gemini-1.5-Pro | Best-of-$N$ | 10.52
| Self-repair | 7.62
| AuPair | 11.63
GPT-4o-mini | Best-of-$N$ | 20.09
| Self-repair | 11.87
| AuPair | 15.28
Gemini-1.5-Flash | Best-of-$N$ | 9.47
| Self-repair | 22.28
| AuPair | 11.79
Gemma-27B | Best-of-$N$ | 14.86
| Self-repair | 9.72
| AuPair | 15.21
Gemma-9B | Best-of-$N$ | 13.16
| Self-repair | 9.38
| AuPair | 13.09
As we can see from the above table, in most cases, using AuPair results in an increase in the number of problems for which a fix is worse than the initial guess. This is to be expected since AuPair is an algorithm that in addition to boosting performance also boosts diversity of the generated responses.
2. The evaluations focus predominantly on competitive programming tasks. It remains unclear whether the impressive results would translate directly into other code repair domains.
Competitive programming tasks are a canonical domain for code repair for two reasons: these tasks are **rigorous** (can be precisely evaluated, cannot be gamed) and they are **hard** (supposed to differentiate between top human coders). Both of these make them appropriate to research and test new methods, the crisp, reliable evaluation of challenging tasks means that even differences of a few percent are meaningful capability improvements. But indeed, competitive programming tasks are not the bulk of what users might request. While we see no reason to doubt that our method will translate to broader, less well-defined, or easier tasks, we do not want to make a strong claim about this transferability; producing that evidence is out of scope for this paper. We will expand on this in the final version of the paper.
3. The paper's method strongly relies on predefined test cases and evaluations on these tests to judge fix quality.
Indeed, in this work, we have used test cases for assessment of fix quality and for guiding the exploration phase, such as has been done in prior work [1,2,3]. However, contrary to other approaches for code repair, where failed test cases are given to the model in the prompt, we do not include test cases in the prompt; we use test cases solely to build the AuPairs.
Furthermore, we would like to highlight that there is nothing fundamentally in the approach that prevents the use of other feedback mechanisms, such as reward models, or other sources of feedback to boost its performance. We leave this set of experiments for future work since we believe it is orthogonal to the key ideas presented.
4. How sensitive is AuPair's effectiveness to the size and diversity of the validation set used for AuPair selection?
This is a great question; we conducted experiments with smaller validation sets to curate AuPairs and report the results below. We also include the results of the random baseline for calibration:
**Size of validation set** | **# of AuPairs** | **Score (inference budget = 32)**
--- | --- | ---
Random | N/A | 0.383
10% | 32 | 0.403
25% | 52 | 0.418
100% | 144 | 0.438
The larger the validation set, the more distinct complementary improvements can be observed, and hence the larger the maximal set of AuPairs that can be discovered. So larger validation sets make it possible to effectively scale up to more inference compute. However, even just looking at just the top 32 AuPairs (which is apples-to-apples for varying validation set sizes), we find that their quality increases monotonically with the size of the validation set. We have also conducted additional analysis on the impact of smaller datasets, for which we point the reviewer to section A.4 in the paper.
---
[1] Code Repair with LLMs gives an Exploration-Exploitation Tradeoff, Tang et al. 2024
[2] Cycle: Learning to self-refine the code generation, Ding et al. 2024
[3] Teaching LLMs to self-debug, Chen et al., 2023 | Summary: Paper introduces AuPair, an inference time algorithm to improve code repair capabilities of LLM. The core idea lies in first a diverse of generating golden pairs (guess, fix) using an LLM and then using a submodular selection algorithm to identify and generate an ordered set of golden example pairs. During inference, these pairs are used sequentially as 1-shot in-context examples to guide the LLM. The paper illustrates extensive experiments across 5 LLMs and 7 code repair datasets, demonstrating that AuPair consistently and significantly outperforms best-of-N and self-repair
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, Appendix.
Relation To Broader Scientific Literature: While the paper focuses on code repair this can have impact on contributing to more robust and reliable software.
Essential References Not Discussed: 1. Code Repair with LLMs gives an Exploration-Exploitation Tradeoff (Tang et al. NeurIPS 2024)
2. CYCLE: Learning to Self-Refine the Code Generation (Ding et al, OOPSLA 2024)
Other Strengths And Weaknesses: Strengths:
1. Extensive experiments section covering various aspects of the approach (scaling, generalization, multiple datasets)
2. The approach is interesting for improving the program repair capabilities of LLMs without any fine-tuning/training.
Weakness:
1. The evaluation primarily focuses on models from the Gemini family and GPT-4o-mini. It omits other models recognized for strong coding performance (and high rankings on relevant benchmark leaderboards), such as Claude 3.5 Sonnet, GPT-4o, and DeepSeek-V3. Including a more diverse set of models would strengthen the evaluation.
2. All experiments are conducted within the domain of competitive programming. Checking if AuPair could be used for other programming tasks, such as programming-by-example (e.g., using datasets like those explored by Tang et al. in "Code Repair with LLMs Give an Exploration-Exploitation Trade-off"), would be valuable and help generalize the approach.
3. While the ablation study in Section 3.2 using random pairs provides insights into AuPair quality, further ablation experiments could help assess the effectiveness of the greedy selection approach. This could include comparing against randomly ordered AuPairs or exploring alternative selection criteria, such as a curriculum learning-inspired approach (e.g., selecting pairs in increasing order of problem difficulty).
4. Is there a reason why RAG over the candidates dataset (guess, fix) pairs is not considered as a baseline?
Other Comments Or Suggestions: See questions for authors
Questions For Authors: 1. Is it accurate to state that AuPair utilizes a slightly higher computational budget compared to best-of-N and self-repair baselines? AuPair involves LLM calls in two preprocessing stages (dataset creation and pair selection) before the budgeted N inference calls. The budget for the creation phase, ranging from 10,000 to 35,000 calls, is quite substantial.
2. The phrase "in conjunction with high performing" in line 260 is somewhat confusing, given that Gemini-1.5-Pro's initial performance is reported as low.
3. Section 2.2, describing the fix quality matrix, requires a more detailed explanation as it is central to the approach. The necessity of the subtraction step, in particular, is not intuitively clear and needs further justification.
4. Figure 7 suggests a performance drop when AuPairs are generated by a different model, even if the generating model is generally superior. More insights could help explain this phenomenon and the underlying reasons for this cross-model performance variation.?
5. Could there be another analysis on the diversity aspect which includes the type of repairs these pairs help in? For instance, incorrect formatting, syntax, semantic, different types of bugs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. Including a more diverse set of models would strengthen evaluation
We have included 5 models spanning 3 model families across 7 datasets in the existing results – Gemini, GPT, Gemma; the results clearly indicate that AuPair works across models. We understand the reviewer's concern and agree that showing results on more models would strengthen evaluation, but getting the full set of results with another model within the short rebuttal period would not be feasible. What we can do, however, is include new transfer results with GPT-4o (suggested by reviewer) using AuPairs from GPT-4o-mini and Gemini-1.5-Pro.
**Approach**|**Score on GPT-4o**
---|---
Initial|0.244
Best-of-$N$|0.100
Self-repair|0.374
w/ Gemini-1.5-Pro AuPairs|**0.486**
w/ GPT-4o-mini AuPairs|**0.573**
Even when using AuPairs from other models, we see **11% and 20% absolute performance gain** over baselines, thus solidifying our claim that even for more code-competent models such as GPT-4o, AuPair gives large performance gains.
2. RAG over the candidates dataset pairs is not considered as a baseline
We implemented a RAG baseline, choosing the top 32 pairs for each problem, retrieved from the candidate pair dataset, as the reviewer suggested:
Model|RAG score|AuPair score
---|---|---
Gemini-1.5-Pro|0.379|**0.438**
GPT-4o-mini|0.361|**0.378**
Gemini-1.5-Flash|0.318|**0.352**
Gemma-27B|0.178|**0.214**
Gemma-9B|0.156|**0.198**
We see that across all models, AuPair, which uses a fixed set of in-context examples, outperforms RAG, which requires the entire set of candidate pairs to be compared with each test problem to choose the in-context examples.
3. Another analysis on the diversity aspect which includes the type of repairs these pairs help in
We show a breakdown of the repairs generated into 4 categories: 1) %problems with improved fix score, 2) %problems in which code was reformatted to obey constraints, 3) %problems in which fix was improved by just changing the logic, 4) %problems in which score remained unchanged. Composite changes (formatting + logical) are reported in formatting fixes.
**Model**|**Approach**|**Improvements**|**Formatting fixes**|**Logical fixes**|**Unchanged**
---|---|---|---|---|---
Gemini-1.5-Pro|Best-of-$N$|9.67|1.37|8.45|81.99
|Self-repair|8.50|0.83|7.91|82.48
|AuPair|47.14|50.32|14.11|44.82
GPT-4o-mini|Best-of-$N$|8.66|2.48|7.39|61.92
|Self-repair|12.98|2.71|10.81|60.24
|AuPair|22.34|25.41|6.70|56.46
Gemini-1.5-Flash|Best-of-$N$|8.27|0.05|8.24|82.92
|Self-repair|18.52|0.12|18.52|54.77
|AuPair|24.21|16.12|16.88|65.99
Gemma-27B|Best-of-$N$|11.57|0.49|10.78|77.09
|Self-repair|9.35|0.44|9.13|79.55
|AuPair|18.97|13.90|11.32|69.24
Gemma-9B|Best-of-$N$|16.76|7.48|12.77|72.66
|Self-repair|10.60|0.42|10.50|79.85
|AuPair|20.00|18.16|14.71|69.73
Some insights:
- Since Gemini-1.5-Pro guesses have more formatting bugs, % problems with formatting fixes using AuPair is high.
- AuPair also helps the model repair solutions with logical errors.
- AuPair yields responses that are more diverse than baselines in the test case scores, indicated by the lower value of "Unchanged".
Example of composite fixes:
Guess:
```
def solve(s: str):
n = len(s)
a = int(input()) - 1
b = int(input()) - 1
cost = 0
for i in range(a, b):
if s[i] != s[i + 1]:
cost += 1
print(cost)
```
Fix:
```
def solve(s: str):
n, a, b = map(int, s.split('\n')[0].split())
companies = s.split('\n')[1]
cost = 0
if companies[a - 1] != companies[b - 1]:
cost = abs(a - b)
print(cost)
```
4. Does AuPair utilize a slightly higher computational budget compared to baselines?
This is partially correct: there is a one-time amortised cost that our algorithm incurs to construct the AuPairs, but the cost at test-time is identical. Note that the same fixed set of AuPairs boosts performance across models and datasets (Fig. 6, 7 show out-of-distribution generalisation), indicating that the upfront cost is easily amortised.
5. The phrase "in conjunction with high performing" in line 260 is somewhat confusing
We explain this in line 250: "since the code generated has to adhere to a specific format to allow successful execution, we observe that many initial guesses of generated code fail because they do not obey these conditions". Moreover, our experiments on GPT-4o above also show that the insight stands, since we observe absolute improvements of 11% and 20% using AuPair compared to self-repair (strongest baseline).
6. The necessity of the subtraction step is not intuitively clear
The fix quality matrix contains the score for each candidate pair for each validation problem. After picking the pair with the best mean score, we subtract this score from the matrix because only after removing the previous best pair can we find the complementary next best pair.
7. Missing references
We will include the mentioned references in the Related Work section. | null | null | null | null | null | null | null | null |
Deep Streaming View Clustering | Accept (poster) | Summary: This paper proposes a deep streaming view clustering algorithm (DSVC). It considers the scenario where data is acquired in the form of view streams in clustering tasks. DSVC aligns the prototype knowledge of the current view with the historical knowledge distribution, thereby mitigating the concept drift issue between streaming views. Furthermore, the aligned prototype knowledge guides the current data distribution, which enhances the clustering structure. Experimental results demonstrate that DSVC outperforms state-of-the-art methods.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes. The metrics, including ACC, NMI and ARI, are commonly used in clustering analysis.
Theoretical Claims: N/A. There is no theoretical claim.
Experimental Designs Or Analyses: Yes. I have checked the comparison to existing methods, ablation study and visualization analysis. They well support the claims.
Supplementary Material: Yes. They are dataset description, experimental settings, algorithm pseudocode, and additional supplementary experiments.
Relation To Broader Scientific Literature: Existing multi-view methods assume all data views are collected in advance. However, in practical scenarios, new data views are collected over time. This paper proposes a DSVC method to solve this. This is meaningful and would have a broad impact in literature.
Essential References Not Discussed: [1] is also a recently proposed work to explore the clustering task when data views are collected overtime. This could be introduced and discussed.
[1] Wan et al. Fast Continual Multi-View Clustering With Incomplete Views. TNNLS 2024.
Other Strengths And Weaknesses: Strengths:
1. This paper identifies and mitigates the issue of concept drift in streaming view clustering, which is novel in literature.
2. The proposed method achieves competitive results in experiments.
3. The paper is well organized and easy to follow.
Weaknesses:
1. The authors mention the use of a cross-attention mechanism to reconstruct prototypes and features in Section 3.3, but there is no ablation study provided in the experiments to assess the impact of this component.
2. In Section 3.5 of the manuscript, the description of how the Knowledge Guidance Learning (KGL) module enhances the clustering structure needs to be made clearer.
3. As shown in Fig. 4, the training results of the first view exhibit significant differences depending on the view streams of the different sequences. Is there any method to reduce the discrepancies in the training results of the initial view?
Other Comments Or Suggestions: Please see Section **Other Strengths And Weaknesses**
Questions For Authors: 1. What advantages does the proposed distribution consistency learning have over conventional contrastive learning strategies?
2. Based on Tables 1 and 2, I noticed a relatively large standard deviation for the Stl10-fea dataset. Is this phenomenon expected or normal?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our submission and provide valuable feedback.
**W1: Ablation study without the cross-attention mechanism**
**AW1:** To validate the effectiveness of the cross-attention mechanism, we conducted an ablation study. As shown in the following table, incorporating the cross-attention mechanism in our work can significantly enhance performance. This is because the cross-attention mechanism enables the prototype knowledge to better fit the data distribution while enhancing the intra-cluster consistency and inter-cluster discriminability of the features. The resulting prototype knowledge and features work in concert with our approach to achieve optimal performance.
### Ablation experiments on eight datasets (with or without cross-attention mechanism)
| Datasets | ALOI-10 | HandWritten | Landuse-21 | Scene-15 | stl10-fea | ALOI-100 | YoutubeFace-sel-fea | ALOI-1000 |
|----------------------|---------|-------------|------------|----------|-----------|----------|---------------------|-----------|
| **Cross-attention mechanism** | acc | acc | acc | acc | acc | acc | acc | acc |
| ✗ | 74.22 | 86.25 | 20.33 | 31.13 | 47.64 | 60.92 | 19.42 | 36.20 |
| ✓ | **86.69** | **95.44** | **28.52** | **45.33**| **63.41** | **80.53**| **25.09** | **58.32** |
---
**W2: In Section 3.5 of the manuscript, the description of how the Knowledge Guidance Learning (KGL) module enhances the clustering structure needs to be clarified.**
**AW2:** Our Knowledge Guidance module encourages samples to converge toward their corresponding prototypes while pushing away unrelated samples. This ensures that samples within the same cluster exhibit higher similarity, while those from different clusters exhibit lower similarity (i.e., intra-cluster compactness and inter-cluster separability).
**W3: Is there any method to reduce the discrepancies in the training results of the initial view?**
**AW3:** Your question is highly insightful. In the experiments shown in Fig. 4, the first-view data in different streaming sequences originates from different sources, resulting in significant variations in data quality. Consequently, the training outcomes for the first view exhibit corresponding discrepancies. How to effectively exploit the information from the first-view data and enhance clustering performance in the absence of guidance information remains an open challenge. We will further investigate this issue in our future work.
**Q1: What advantages does the proposed distribution consistency learning have over conventional contrastive learning strategies?**
**AQ1:**
Difference: Traditional contrastive learning aims to learn discriminative feature representations by pulling positive sample pairs closer while pushing negative pairs apart. Its primary objective is to enhance feature discriminability. In contrast, our Distribution Consistency Learning (DCL) measures the divergence between the probability distributions of two samples, with the goal of directly minimizing the discrepancy between distributions.
Advantage: Traditional contrastive learning relies heavily on the construction of positive and negative sample pairs, where the quality and quantity of these pairs directly impact model performance. For instance, when there are noisy or low-quality samples (i.e., hard samples), the model can overly focus on these hard samples, which leads to overfitting and a lack of robustness. In contrast, our Distribution Consistency Learning (DCL) only requires measuring the similarity between sample distributions, eliminating the need for explicit positive and negative pair construction. This avoids performance degradation caused by low-quality negative samples, thereby enhancing the robustness of the model.
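To make the contrast concrete, the following NumPy sketch implements a generic distribution-consistency objective as a symmetric KL divergence between similarity distributions, with no positive/negative pair mining. The function name, cosine similarity, temperature, and symmetric-KL choice are assumptions for illustration, not the paper's exact DCL loss.

```python
import numpy as np

def softmax(x, tau=0.5):
    # Temperature-scaled softmax over the last axis.
    e = np.exp((x - x.max(axis=-1, keepdims=True)) / tau)
    return e / e.sum(axis=-1, keepdims=True)

def distribution_consistency_loss(p_view, b_hist, tau=0.5):
    # p_view: (K, d) current-view prototypes, b_hist: (K, d) historical
    # prototypes; both are L2-normalized inside. The loss is a symmetric KL
    # divergence between the two similarity distributions -- no positive /
    # negative pair construction is required.
    p_view = p_view / np.linalg.norm(p_view, axis=1, keepdims=True)
    b_hist = b_hist / np.linalg.norm(b_hist, axis=1, keepdims=True)
    q_p = softmax(p_view @ b_hist.T, tau)  # how each p_i relates to all b_j
    q_b = softmax(b_hist @ p_view.T, tau)  # how each b_i relates to all p_j
    kl = lambda a, b: np.sum(a * np.log(a / b), axis=-1)
    return 0.5 * (kl(q_p, q_b) + kl(q_b, q_p)).mean()
```

The loss vanishes when the two prototype sets induce identical similarity distributions, which is the alignment behavior the rebuttal describes.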
**Q2: The standard deviation of Stl10-fea data set is relatively large. Is this phenomenon normal?**
**AQ2:** As shown in Tables 1 and 2, the clustering performance on the Stl10-fea dataset exhibits a relatively large standard deviation across almost all methods. This is primarily due to the significant feature quality disparity between different views (it can be seen in Fig. 4), along with the large-scale sample but limited number of views. As a result, it becomes challenging to fully exploit the complementary information among views during training, leading to considerable performance fluctuations across different random initializations for all methods.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. They have addressed my concerns. | Summary: This paper identifies the concept drift problem in streaming view clustering, which causes outdated models to fail to adapt to new view data. To address this, the authors employ knowledge aggregation learning to simultaneously reconstruct prototype knowledge from the features and reconstruct features from the prototype knowledge. To mitigate the concept drift between view streams, distribution consistency learning aligns the prototype knowledge of the current view with the historical knowledge, ensuring consistency in the data distributions across the collected views. Finally, knowledge guidance learning is introduced to leverage prototype knowledge to guide data distribution, thereby enhancing the clustering structure of feature representations.
Claims And Evidence: The superior performance of DSVC and its effectiveness in mitigating concept drift are well-supported by extensive experimental evidence, including results across multiple datasets. Improvements in both accuracy and efficiency are clearly documented. However, the theoretical analysis and description of the DSVC mechanism, particularly the explanation of knowledge guidance learning, could be made clearer.
Methods And Evaluation Criteria: The proposed deep streaming view clustering method is well-suited to address the streaming view clustering problem in multi-view clustering. The benchmark datasets and performance metrics used are appropriate for evaluating the effectiveness of the method. Through experimental design, DSVC is compared with existing methods, validating its effectiveness.
Theoretical Claims: The theoretical claims regarding streaming view clustering and the necessity of addressing concept drift between view streams are well-supported. However, the mathematical formulation of DSVC could benefit from greater clarity. Specifically, the definition of how knowledge guidance learning enhances the clustering structure is lacking in terms of formal mathematical work.
Experimental Designs Or Analyses: The experimental design is well-structured, with clear benchmarks and comparisons to other state-of-the-art (SOTA) methods. The paper analyzes the impact of different view stream sequences to validate the effectiveness of the proposed method in mitigating concept drift.
Supplementary Material: The supplementary material has been reviewed, which includes useful appendices, dataset descriptions, experimental setups, algorithm pseudocode, and additional explanations for each experiment, divided into seven sections. These details contribute to a better understanding of the experimental setup, although some sections could benefit from clearer explanations.
Relation To Broader Scientific Literature: This paper positions itself within the context of streaming view clustering, comparing DSVC with other multi-view clustering methods. It highlights the lack of the research regarding streaming view clustering and proposes a novel approach for addressing the streaming view clustering.
Essential References Not Discussed: The paper cites key references; however, it would benefit from a more in-depth discussion on the connection with related techniques for handling concept drift.
Other Strengths And Weaknesses: Strengths:
1. Due to knowledge aggregation learning, the obtained feature representations and prototype knowledge are more representative.
2. This paper employs distribution consistency learning to align the prototype knowledge of the current view with the historical knowledge distribution, effectively mitigating the concept drift issue.
3. The performance significantly outperforms the comparison methods.
4. The experiments are diverse and account for various types of data.
Weakness:
1. In this paper, the Knowledge Aggregation Learning module uses cross-attention mechanisms to reconstruct prototypes and features. This approach appears to have been used in other works as well. Could the authors clarify the novelty of its application in this context?
2. In Section 3.4, this paper proposes using the distribution consistency learning loss to ensure that the reconstructed features do not deviate from the original data distribution when processing the first view. However, no related experiments are presented to validate the effectiveness of this strategy. It is recommended to provide relevant experimental results to support this approach.
3. In Section 3.5, the explanation of why the clustering structure can be enhanced is not very clear.
4. In the last paragraph of Section 4.4, the explanation for why the method can effectively handle large-scale datasets is not comprehensive.
5. In this paper, the training of subsequent views appears to depend on the previously trained views. Would the quality of the first view negatively impact the training of subsequent views if the first view is of poor quality?
6. In Fig. 3 and 4, it can be observed that in certain datasets, there is a sudden performance improvement after training on a particular view. Please explain the relevant reasons.
7. In Table 2 and Fig. 11, it is observed that the performance on the Stl10-fea dataset exhibits significant fluctuations. Is this behavior due to the inherent characteristics of the dataset, or is there another underlying reason?
Other Comments Or Suggestions: The methodology explanation and experimental analysis in this paper require a more detailed and comprehensive discussion from the authors.
Questions For Authors: For related issues in this section, please refer to the Weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **W1: The cross-attention mechanism appears to have been used in other works as well. Could the authors clarify the novelty of its application in this context?**
**AW1:** Other methods, such as ProImp [1], are designed for static multi-view clustering tasks, where the attention mechanism primarily focuses on one-time data reconstruction and clustering. In contrast, DSVC is designed for streaming multi-view clustering scenarios, where view data arrives dynamically over time and is subject to concept drift. The Knowledge Aggregation Learning (KAL) module continuously processes newly arrived views and interacts dynamically with the historical knowledge repository to maintain global consistency in data distribution. The KAL module in this dynamic environment fundamentally differs from conventional objectives.
**Reference:**
[1] Incomplete multi-view clustering via prototype-based imputation, IJCAI, 2023.
**W2: The distribution consistency loss used in the first view of this paper lacks a corresponding ablation experiment.**
**AW2:** We have added ablation experiments for the Distribution Consistency Loss (DCL). As shown in the table, incorporating DCL in the first view significantly enhances clustering performance.
| Datasets | ALOI-10 | HandWritten | Landuse-21 | Scene-15 | stl10-fea | ALOI-100 | YoutubeFace-sel-fea | ALOI-1000 |
|----------------|---------|-------------|------------|----------|-----------|----------|---------------------|-----------|
| **DCL loss** | acc | acc | acc | acc | acc | acc | acc | acc |
| ✗ | 79.89 | 88.05 | 26.52 | 42.91 | 44.71 | 78.55 | 21.50 | 50.13 |
| ✓ | **86.69** | **95.44** | **28.52** | **45.33**| **63.41** | **80.53**| **25.09** | **58.32** |
---
**W3: In Section 3.5, the explanation of why the clustering structure can be enhanced is not very clear.**
**AW3:** The knowledge guidance learning loss first computes the similarity relationships between prototypes and samples. Then, it maximizes the similarity between features and their corresponding prototypes while minimizing the similarity between features and prototypes of different classes, thereby reinforcing the intra-cluster feature consistency and the inter-cluster feature discriminability.
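The described objective resembles a prototype-level cross-entropy. Below is a minimal hypothetical sketch; the function name, L2 normalization, and temperature are assumptions, not the paper's exact KGL formulation.

```python
import numpy as np

def knowledge_guidance_loss(H, P, labels, tau=0.5):
    # H: (n, d) sample features, P: (K, d) prototypes, labels: (n,) cluster
    # assignments. Maximizes similarity between each feature and its own
    # prototype while minimizing similarity to the other prototypes.
    H = H / np.linalg.norm(H, axis=1, keepdims=True)
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    logits = (H @ P.T) / tau                             # (n, K) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(H)), labels].mean()
```

Minimizing this loss pulls each feature toward its assigned prototype and pushes it away from the others, which is exactly the intra-cluster compactness / inter-cluster separability effect described.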
**W4: The explanation for why DSVC can effectively handle large-scale datasets is not comprehensive.**
**AW4:** Our DSVC leverages a prototype knowledge base to transfer historical knowledge, which considers only the number of prototypes $K$ and current-view samples $n$ during computation. Thus, the computational complexity is $O(K \times n)$, where $K \ll n$. Moreover, the memory overhead during updates is significantly reduced to $O(K \times d)$, where $d$ denotes the embedding dimension. Therefore, our method can effectively process large-scale data under limited conditions.
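A back-of-the-envelope comparison makes the gap concrete; the sizes below are hypothetical and chosen only for illustration.

```python
# Hypothetical sizes, for illustration only.
K, n, d = 100, 100_000, 512

dsvc_ops = K * n       # prototype-vs-sample similarities, O(K x n)
dsvc_mem = K * d       # prototype knowledge base entries, O(K x d)
pair_ops = n * n       # pairwise-sample methods, O(n^2)

print(dsvc_ops)        # 10000000
print(dsvc_mem)        # 51200
print(pair_ops)        # 10000000000 -- n/K = 1000x more than DSVC
```

With $K \ll n$, the prototype-based cost grows linearly in the sample count, whereas any pairwise-sample scheme grows quadratically.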
**W5: Would the quality of the first view negatively impact the training of subsequent views?**
**AW5:** As shown in Fig. 4 and 9, the quality of the first view does not have a significant negative impact on subsequent training. For example, in the HandWritten dataset, despite substantial variations in the quality of the first view, the final results remain relatively stable as more view data is continuously collected. Even when the initial view quality is suboptimal, the overall performance difference remains minimal.
**W6: In Fig. 3 and 4, it can be observed that there is a sudden performance improvement after training on a particular view.**
**AW6:** This phenomenon occurs because different views are collected from distinct sources, resulting in significant variations in feature quality across views. Consequently, during training, when a view with highly distinguishable feature representations (i.e., a view with high inter-class separability) is encountered, the performance experiences a noticeable boost.
**W7: In Table 2 and Fig. 11, it is observed that the performance on the Stl10-fea dataset exhibits significant fluctuations.**
**AW7:**
**Large standard deviation:** This is primarily due to the significant feature quality disparity between different views (feature dimensions are 512, 1024, and 2048, respectively) of the Stl10-fea dataset, along with the large-scale sample but limited number of views. As a result, it becomes challenging to fully exploit the complementary information among views during training, which leads to considerable performance fluctuations for all methods.
**Performance fluctuations:** Our method is sensitive to the parameters $\alpha $ and $\beta $. To intuitively investigate the impact of $ \alpha $ and $ \beta $ on performance, we conducted the sensitivity analysis presented in Fig. 11. In our approach, we set $ \alpha $ and $ \beta $ to 0.001 and 1, respectively, for the STL10-fea dataset.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response, which overcomes my concerns. I maintain my score. | Summary: The paper presents Deep Streaming View Clustering (DSVC), a method designed to tackle concept drift in multi-view clustering with streaming data. DSVC features three modules: Knowledge Aggregation Learning (KAL) for feature extraction, Distribution Consistency Learning (DCL) to align current and historical knowledge, and Knowledge Guidance Learning (KGL) to enhance clustering. It outperforms 12 state-of-the-art methods in clustering accuracy, stability, and scalability on various datasets, making it effective for real-world streaming data applications.
Claims And Evidence: The authors claim that a distribution imbalance exists between different view streams, i.e., the concept drift problem. Figure 1(a) needs a more detailed explanation of the model and evaluation settings. Also, it would be better to explain how this concept drift problem affects performance in theory, rather than relying only on the experimental results.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Eq. 7 needs further explanation about the historical prototype self-similarity term in the denominator. It appears that the probability $Q_i^b$ cannot sum up to 1 if this term exists, and the authors do not explain why this term exists.
Experimental Designs Or Analyses: Yes, the experimental designs and analyses are valid.
Supplementary Material: The appendix gives detailed information about the multi-view datasets, the experimental settings, the pseudo-code of the algorithm, the impact of different view streaming orders, sensitivity analysis of three hyper parameters: numbers of prototype knowledge and the two trade-off parameters in the loss function. And at last, the limitation of this work.
Relation To Broader Scientific Literature: In summary, the paper makes an important contribution by bridging the gap between static multi-view clustering methods and dynamic streaming clustering methods. Its focus on concept drift, distribution consistency, and the alignment of historical and current knowledge opens up new possibilities for more effective clustering in continuously evolving datasets. These contributions are well-aligned with the broader scientific efforts to improve streaming learning and multi-view clustering.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
- The paper is well-constructed and easy to read.
- The experiments are conducted over 8 datasets and 12 state-of-the-art methods.
Weaknesses
- The authors should include the notation of Z and H in Figure 2 for better understanding.
- The authors claim they might be the first work to reveal the issue of concept drift in the context of streaming view clustering, but the paper lacks thorough investigation to address the problem.
- In section 3.1, P is of the dimension K*d, but in section 3.4, P seems to have the same dimension as B, N*K. Also, it would be better to clarify the dimensions of the later-introduced variable, such as p, h, b in the context for better understanding.
Other Comments Or Suggestions: No.
Questions For Authors: please refer to the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the time you have taken to review our submission and provide valuable feedback.
**Q1: Claims And Evidence**
**Q1(a): Figure 1(a) needs more detailed explanation about the model and evaluation settings.**
**AQ1(a):** Figure 1(a) illustrates that, due to the distribution imbalance across different views (i.e., the concept drift problem), the performance achieved on different views also varies.
**Q1(b): How does this concept drift problem affect the performance?**
**AQ1(b):** Thank you for your thoughtful comments! Since multi-view data are collected from different sensors or captured from various views, discrepancies in distribution and quality naturally arise among different views. This phenomenon is referred to as the concept drift problem. Existing multi-view clustering methods assume that all views are pre-collected for joint training simultaneously. However, in dynamic environments, due to the mechanisms of data generation, sampling frequencies and transmission speeds vary across different views. As a result, multi-view data is typically collected in a sequential manner (i.e., streaming view), which makes it infeasible to wait until all the views are ready for training. Therefore, data must be processed sequentially in a streaming fashion, training one view at a time. However, due to the presence of concept drift across different view streams, the similarity relationships between different samples and the feature representations of the same sample may vary across views after model training, which is shown in the following image (The image is in the anonymous URL: https://anonymous.4open.science/r/Fig-0D5B). This discrepancy reduces the intra-class feature consistency and the inter-class feature discriminability, thereby degrading clustering performance.
**Q2: Theoretical Claims: The probability $Q_i^b$ cannot sum up to 1 if Eq.7 exists, and the authors do not explain why this term exists.**
**AQ2:** Thank you very much for your insightful comment! The derived $Q_i^b$ from Eq. 7 does not sum to 1. Therefore, we apply the normalization to ensure its sum is strictly 1, making it suitable for the computation of $\mathcal{L}_{d}$. Eq. 7 computes the distribution probability of class-specific prototypes in both historical knowledge and the current view. The complete formulation is as follows:
$$
Q_i^b =\frac{\exp{(S(p_i^v,b_i)/\tau)}}{\sum_{j=1}^{K}\exp{(S(p_i^v,b_j)/\tau)}+{\sum_{j\neq i}^{K}\exp{(S(b_i,b_j)/\tau)}}},\quad Q_i^p =\frac{\exp{(S(b_i,p_i^v)/\tau)}}{\sum_{j=1}^{K}\exp{(S(b_i,p_j^v)/\tau)}+{\sum_{j\neq i}^{K}\exp{(S(p_i^v,p_j^v)/\tau)}}}.
$$
Therefore, $ Q_i^b $ can be interpreted as the similarity probability of the $ i $-th prototype knowledge ($ b_i $ and $ p_i^v $) mapped within the historical knowledge base ($ B $). Similarly, $ Q_i^p $ represents the similarity probability of $ b_i $ and $ p_i^v $ mapped within the current view knowledge. We achieve alignment of the current view prototype knowledge with the historical knowledge distribution by minimizing the discrepancy between the $ Q_i^b $ and $ Q_i^p $ distributions.
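The completed formula plus the normalization step described above can be sketched as follows; cosine similarity standing in for $S(\cdot,\cdot)$ and the function name are assumptions for illustration.

```python
import numpy as np

def q_b_distribution(P, B, tau=0.5):
    # P: (K, d) current-view prototypes p_i^v, B: (K, d) historical
    # prototypes b_i. Cosine similarity stands in for S(., .).
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    e_pb = np.exp(P @ B.T / tau)   # exp(S(p_i^v, b_j) / tau)
    e_bb = np.exp(B @ B.T / tau)   # exp(S(b_i, b_j) / tau)
    num = np.diag(e_pb)            # exp(S(p_i^v, b_i) / tau)
    den = e_pb.sum(axis=1) + e_bb.sum(axis=1) - np.diag(e_bb)
    q = num / den                  # per-class value from Eq. 7
    return q / q.sum()             # renormalize so the K values sum to 1
```

After the final renormalization, the $K$ values form a proper probability distribution, which is what makes them usable inside $\mathcal{L}_{d}$.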
**W1: The authors should include the notation of Z and H in Figure 2 for better understanding.**
**AW1:** We have optimized Figure 2 accordingly in the revised manuscript.
**W2: The authors claim they might be the first work to reveal the issue of concept drift in the context of streaming view clustering, but the paper lacks thorough investigation to address the problem.**
**AW2:** Through a comprehensive survey and review of related literature, the most comparable methods to our setup are CAC [1], ACMVC [2], LSVC [3], and OBAL [4]. Among them, CAC and LSVC achieve streaming view clustering by maintaining a consensus matrix, while ACMVC and OBAL consider data stream scenarios. However, none of these methods account for the concept drift problem in streaming views, highlighting the novelty of our proposed DSVC. Furthermore, we have conducted a more comprehensive literature analysis to enrich the manuscript. We look forward to further discussions with you.
**Reference:**
[1] Live and learn: Continual action clustering with incremental views, AAAI, 2024.
[2] Continual multi-view clustering with consistent anchor guidance, IJCAI 2024
[3] LSVC: A Lifelong Learning Approach for Stream-View Clustering, TNNLS, 2024.
[4] Online boosting adaptive learning under concept drift for multistream classification, AAAI, 2024.
**W3: The dimensions of P and B need to be clarified.**
**AW3:** We have revised the ambiguous parts of the manuscript to enhance clarity. Specifically, we have updated $\mathbf{B}=\{b_1,b_2,\ldots,b_K\}\in{ \mathbb {R}}^{N \times K}$ to $\mathbf{B}=\{b_1,b_2,\ldots,b_K\}\in{ \mathbb {R}}^{K \times d}$ for better comprehension.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I have also read the comments from other reviewers. Most of my concerns have been adequately addressed. As a result, I would like to keep my score as '3', leading to acceptance. | Summary: This paper explores a rarely addressed area in multi-view clustering, namely, streaming view clustering. In this work, the authors utilize the Knowledge Aggregation Learning (KAL) module to extract features and prototype knowledge. Subsequently, the Distribution Consistency Learning (DCL) module is employed to mitigate the concept drift problem across view streams. Finally, the Knowledge Guidance Learning (KGL) module is introduced to enhance the clustering structure. Extensive experiments demonstrate the effectiveness of the proposed method.
Claims And Evidence: Yes, the superiority and effectiveness of the proposed method are supported by extensive experimental evidence.
Methods And Evaluation Criteria: Yes, the proposed method effectively addresses the concept drift issue between view streams. The benchmark datasets (e.g., ALOI-10, HandWritten, LandUse-21, etc.) and performance metrics (ACC, NMI, ARI) used are appropriate for evaluating the effectiveness of the method.
Theoretical Claims: Yes, in this paper, the necessity of addressing the concept drift problem in stream-based view clustering is thoroughly supported by relevant literature and experimental evidence provided by the authors.
Experimental Designs Or Analyses: Yes, the paper conducts extensive experiments to demonstrate the effectiveness of the proposed method. By comparing it with state-of-the-art DMVC and SVC methods, the superiority of its performance is highlighted. The effectiveness of the method in mitigating concept drift is further validated through experiments with varying view stream sequences. Additional ablation studies also substantiate the rationality and effectiveness of each component.
Supplementary Material: Yes, I have reviewed all the supplementary materials, including the dataset descriptions, experimental settings, DSVC training algorithm, view stream analysis, prototype knowledge number analysis, parameter sensitivity analysis, and limitations.
Relation To Broader Scientific Literature: This study considers the scenario of data acquisition via view streams under dynamic conditions and compares the proposed method with existing DMVC and SVC approaches. It highlights the gap in the field of streaming view clustering and introduces a novel DSVC method to address these issues.
Essential References Not Discussed: In this paper, each theory and technique is supported by relevant references.
Other Strengths And Weaknesses: This paper demonstrates strong originality by identifying the issue of concept drift in the context of view streams. The proposed method is novel, with the distribution consistency learning module effectively mitigating concept drift between view streams. Comprehensive experiments are conducted to validate the superiority and effectiveness of the proposed approach. However, some deeper explanations are lacking in both the theoretical interpretation and experimental analysis. Further in-depth theoretical and experimental exploration would benefit the paper.
Other Comments Or Suggestions: 1) The authors should clarify the meaning of each symbol used in the manuscript. For instance, in Eq. 4, what does the symbol $d$ represent? Is it distance or dimensionality?
2) The authors are advised to ensure consistent use of capitalization throughout the manuscript. For example, in the first paragraph of Section F in the appendix, $Loss$ is capitalized, whereas it is written in lowercase in other sections.
Questions For Authors: Q1: As far as I know, domain streams and view streams share conceptual similarities, and the authors have mentioned domain stream learning in related work. Could you clarify the distinction between domain streams and view streams?
Q2: In Section 3.3, the authors employ a cross-attention mechanism to learn prototypes. What are the advantages of this approach compared to using prototypes learned through k-means clustering?
Q3: In the last paragraph of Section 4.4, the authors mention that some DMVC methods cannot handle large-scale data under limited computational resources. However, we observe that the stream-view clustering method LSVC also struggles with large-scale data, but the authors do not provide an explanation for this phenomenon.
Q4: During clustering, are the features used those derived from attention-based reconstruction, or the features extracted by the autoencoder?
Q5: Based on Figure 11, we observe that in some datasets (such as Stl10-fea and YoutubeFace-sel-fea), the parameters $\alpha$ and $\beta$ have a significant impact on the results. Could the authors provide an explanation for why this phenomenon occurs?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Firstly, we sincerely thank you for your detailed review and constructive feedback, which have greatly contributed to improving the presentation of our submission.
**S1: For instance, in Eq. 4, what does the symbol d represent?**
**AS1:** Thank you very much for your valuable suggestions. In Eq. (4), $d$ represents the dimensionality of the feature $Z$.
**S2:The authors are advised to ensure consistent use of capitalization throughout the manuscript.**
**AS2:** We have already carefully reviewed the entire paper for consistency in terminology to ensure that the final version does not have such problems.
**Q1: Could you clarify the distinction between domain streams and view streams?**
**AQ1:** **Domain Stream** means that when a learner faces a series of tasks, the data input distribution (domain) for each task may be different, but the set of categories for the tasks remains the same. **View Stream** refers to a scenario where multiple views of the same object arrive sequentially in a streaming manner, one after another. The key distinction between the two lies in their tasks: **Domain Stream** primarily tackles domain drift, continual adaptation, and mitigating catastrophic forgetting while maintaining task consistency. In contrast, **View Stream** focuses on addressing challenges such as collaborative learning across multiple views and effectively handling view heterogeneity.
**Q2: What are the advantages of the cross-attention mechanism compared to using prototypes learned through k-means clustering?**
**AQ2:**
**Advantage 1:** **$k$-means** partitions cluster centers based on a fixed distance metric. Its prototypes (i.e., cluster centers) are static and cannot dynamically adjust their distribution according to the data characteristics. In contrast, the prototypes learned through our **cross-attention mechanism** can dynamically adjust based on the data distribution, allowing the prototype knowledge to better fit with the data and become more representative.
**Advantage 2:** The prototypes learned by **$k$-means** are susceptible to the influence of outliers (or noise). In contrast, the **cross-attention mechanism**, through the collaborative optimization between features and prototype knowledge, ensures that the learned prototypes are less affected by outliers (or noise), thereby enhancing robustness.
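A minimal sketch of what a single-head cross-attention prototype update might look like; learnable query/key/value projections and multi-head details are omitted, so this is an illustration of the idea rather than the paper's exact KAL module.

```python
import numpy as np

def cross_attention_prototypes(P, Z):
    # P: (K, d) prototype queries, Z: (n, d) sample features. Each updated
    # prototype is an attention-weighted average of the features, so it can
    # shift with the data distribution during training (unlike fixed k-means
    # centers; learnable projections are omitted for brevity).
    d = P.shape[1]
    scores = P @ Z.T / np.sqrt(d)                       # (K, n) attention logits
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)       # rows sum to 1
    return attn @ Z                                     # (K, d) prototypes
```

Because every updated prototype is a soft average over all samples rather than a hard nearest-neighbor mean, a single outlier has a bounded influence, which matches the robustness argument in Advantage 2.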
**Q3: We observe that the stream-view clustering method LSVC also struggles with large-scale data, but the authors do not provide an explanation for this phenomenon.**
**AQ3:** Thank you for your constructive feedback. Our DSVC leverages a prototype knowledge base to transfer historical knowledge, considering only prototypes and current-view samples during computation. This results in a computational complexity of $O(K \times n)$, where $K$ is the number of prototypes and $n$ is the number of samples ($K \ll n$). Moreover, the memory overhead during updates is significantly reduced to $O(K \times d)$, where $d$ denotes the embedding dimension. In contrast, LSVC exhibits a computational complexity of $O(n^2)$ and maintains historical knowledge via a consensus bipartite graph $\mathbf{Z} \in \mathbb{R}^{k \times n}$, requiring a storage overhead of $O(k \times n)$. As the data scale increases, the computation and update of $\mathbf{Z}$ lead to a substantial surge in memory consumption. Moreover, LSVC is a traditional shallow method, whereas DSVC is a deep learning-based approach. As a result, DSVC is well-suited for handling large-scale datasets, while LSVC struggles to cope with such scenarios.
**Q4: During clustering, are the features used those derived from attention-based reconstruction, or the features extracted by the autoencoder?**
**AQ4:** We perform clustering using the features $ H $ reconstructed through the cross-attention mechanism.
**Q5: From Fig. 11, it is observed that in some datasets, the parameters $\alpha$ and $\beta$ have a significant impact on the results.**
**AQ5:** Our method exhibits a certain degree of sensitivity to the parameters $\alpha$ and $\beta$. To analyze this effect, we conducted a sensitivity analysis, as shown in Fig. 11. The results indicate that excessively high or low parameter values adversely impact clustering performance, which highlights the importance of balancing the three losses within our model. In our approach, for the STL10-fea dataset, we set $\alpha$ and $\beta$ to 0.001 and 1, respectively; for the ALOI-1000 and ALOI-100 datasets, we set both $\alpha$ and $\beta$ to 0.1. For the remaining datasets, $\alpha$ and $\beta$ are set to 0.1 and 1, respectively.
TimeBase: The Power of Minimalism in Efficient Long-term Time Series Forecasting | Accept (spotlight poster) | Summary: This article presents a lightweight model requiring only 0.39k parameters, providing an extremely effective method for time-series forecasting. Extensive experiments have been conducted to verify the effectiveness of such a proposal.
Claims And Evidence: The claims made in the submission are supported by sufficient experiments.
Methods And Evaluation Criteria: The proposed method addresses the problem of efficient time-series forecasting on resource-constrained terminals, and the author has provided a comprehensive evaluation of the model's actual resource consumption and prediction performance.
Theoretical Claims: In Appendix B, the author proves that the convergence lower bound of TimeBase is related to the orthogonality of the extracted temporal patterns, which provides a certain degree of credibility to the model's design.
Experimental Designs Or Analyses: I focused on the section discussing the model's efficiency. In Section 4.3, the author provides detailed efficiency metrics, showing that both runtime and actual computational load are significantly lower than existing lightweight models, demonstrating that TimeBase is indeed extremely lightweight. Additionally, in Section 4.3, the author conducts experiments on efficiency with an ultra-long look-back window. The experimental results show that, even with an extremely long input length, TimeBase remains far more resource-efficient than DLinear, consistently maintaining very low resource consumption.
Supplementary Material: I have carefully reviewed Appendix C, where the author provides detailed experimental setup instructions, greatly enhancing the reproducibility of the paper. The additional experiments, including the Ablation Study, Segment Length Analysis, and Extension to Multi-Seasonality, further demonstrate the effectiveness of TimeBase in time series forecasting.
Relation To Broader Scientific Literature: Currently, there have been some explorations into lightweight time series forecasting [1][2], but these models have only managed to keep the parameter count above 1K. However, TimeBase has made a breakthrough by reducing the parameter count to below 0.4K while maintaining extremely low computational cost, further advancing the development of lightweight time series forecasting.
[1] SparseTSF: Modeling Long-term Time Series Forecasting with 1K Parameters, ICML 2024.
[2] FITS: Modeling Time Series with 10K Parameters, ICLR 2024.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. Long-term time series forecasting is an important problem, and solving it efficiently has many real-world implications. This paper makes full use of the low-rank characteristics of long-term time series and innovatively proposes a lightweight time series forecasting model, effectively addressing this problem.
2. The efficiency validation of the model is thorough, especially the validation with extremely long inputs, which highlights TimeBase's lightweight advantage under extreme conditions.
3. The paper is well-structured and written, and the methodology is described clearly.
Weaknesses:
1. How TimeBase can be integrated into patch-based forecasting methods is not clearly and specifically described by the authors.
2. The authors set the batch size to 12 in Section 4.3. Is there a specific reason for this choice?
3. The authors only use two linear layers in the design of TimeBase, but from my understanding, both basis extraction and segment forecasting could potentially be achieved using transformers or CNNs. The authors should provide a detailed explanation of why they chose to use linear layers for the model design.
Other Comments Or Suggestions: The authors have chosen to bold the two best results in Table 2, but this does not indicate which method performs the best in terms of prediction accuracy. To improve the readability of the table, I recommend that the authors use different colors to highlight the first and second best results.
Questions For Authors: Q1. I am curious—if the implementation of BasisExtract() is replaced with a more parameterized Attention mechanism instead of the linear layer, would the model’s prediction accuracy improve?
Q2. In the absence of prior knowledge or specific conditions, what methodologies can be employed to determine an appropriate P value for a given dataset?
Ethical Review Concerns: No ethical concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer Pc2P,
Thank you for taking the time to review our work and for your valuable feedback.
---
>**D1. Discussion on Other Related Works**
Thank you and `Reviewer 2K2o` for suggesting a comparison with related work and clarifying our distinctions and contributions. We provide an overall comparison in the table below, followed by a brief analysis and discussion.
||FITS|SparseTSF|TimeBase|
|:-:|:-:|:-:|:-:|
|Focused Domain|Frequency Domain|Time Domain|Time Domain|
|Compression Strategy|Low-Pass Filtering|Downsampling|Basis Extraction|
|Parameters|10.5K|1K|**0.39K**|
|MACs|79.9M|12.71M|**2.77M**|
|Time|35s|31.3s|**20.6s**|
- **FITS** performs interpolation after low-pass filtering in frequency domain. FITS utilizes **frequency domain** feature, while TimeBase only operates in **time domain**, which is a fundamental difference between the two.
- **SparseTSF** performs sparse prediction after local feature extraction, focusing on using TCN to **capture local features**. In contrast, TimeBase leverages the low-rank nature of long-term time series and enhances the encoding of key patterns through basis extraction. In terms of data efficiency, SparseTSF makes predictions **using all downsampled segments**, whereas TimeBase improves efficiency by **extracting only a small number of key basis components**.
---
>**W1. Integration with Patch-Based Methods**
TimeBase can serve as a lightweight complexity reducer for patch-based methods by replacing their feature extraction step with basis extraction and aggregation. Specifically for any patch-based models, after transforming the time series into **N** patches, TimeBase aggregates them into **R** core patches (**R ≪ N**). The model then performs forward computation based only on these **R** core patches, significantly reducing computational redundancy. For example, when integrated into PatchTST, TimeBase reduces the MACs from **14.17G to 1.58G** (**Table 7, Appendix C.2**).
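The aggregation step described above can be sketched in a few lines of numpy. This is a hypothetical illustration of the idea, not the paper's implementation: the aggregation weights here are random placeholders, whereas in the actual integration they would be learned jointly with the downstream patch-based model (e.g., PatchTST).

```python
import numpy as np

rng = np.random.default_rng(0)

N, P, R = 30, 24, 4  # N patches of length P, reduced to R core patches (R << N)
patches = rng.standard_normal((N, P))

# Placeholder aggregation weights; learned in the real plug-in setting.
W_agg = rng.standard_normal((R, N)) / np.sqrt(N)

core_patches = W_agg @ patches  # (R, P): downstream layers now see R, not N, patches
print(core_patches.shape)       # -> (4, 24)
```

Since self-attention cost grows quadratically in the number of patches, operating on $R$ core patches instead of $N$ reduces attention cost by roughly $(N/R)^2$, consistent with the reported MACs reduction.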
---
>**W2. Batch Size Selection (Section 4.3)**
For efficiency analysis, we set the batch size to 12 for all models to ensure a fair comparison of efficiency metrics. This choice considers that an overly small batch size is impractical, while an excessively large batch size may cause some models to exceed memory limits. Following the comparison method used in FITS, we also set the batch size to 12 when testing the models' peak memory usage.
---
>**W3. Why Use Linear Layers**
TimeBase is designed to avoid unnecessary complexity while maintaining strong predictive performance. CNNs and Transformers are also effective in capturing sequential dependencies, but they introduce higher computational overhead. Additionally, empirical results from methods like DLinear and TimeMixer have demonstrated the effectiveness of linear models in time series forecasting. Therefore, our method leverages **linear basis extraction**, which efficiently captures key temporal patterns while significantly reducing computational cost.
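A minimal numpy sketch of this two-linear-layer design follows. The shapes and the basis count `R` are illustrative assumptions (random weights stand in for trained ones; biases and the orthogonal constraint are omitted), so this is a shape-level sketch of the idea rather than the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

T, L, P, R = 720, 720, 24, 4  # look-back, horizon, segment length, basis count
x = rng.standard_normal(T)    # one univariate series (channel independence)

segments = x.reshape(T // P, P)  # (30, 24): segment-level view of the input

# Layer 1 -- basis extraction: linear map from T/P input segments to R basis components
W_basis = rng.standard_normal((R, T // P)) / np.sqrt(T // P)
basis = W_basis @ segments       # (R, P)

# Layer 2 -- segment forecasting: linear map from R basis components to L/P future segments
W_fore = rng.standard_normal((L // P, R)) / np.sqrt(R)
y_hat = (W_fore @ basis).reshape(L)  # (720,): reassembled point-level forecast

n_params = W_basis.size + W_fore.size
print(y_hat.shape, n_params)  # -> (720,) 240
```

Note how both weight matrices act across segments, not across time steps, which is what keeps the parameter count in the hundreds rather than the $2TL$ of a DLinear-style point-level map.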
---
>**S1. Table 2 Formatting Improvement**
Thank you for your suggestion. We will update Table 2 by using **different colors to highlight the best and second-best results**, improving readability.
---
>**Q1: Replacing BasisExtract() with an Attention Mechanism**
Thank you for your question. Here, we compare implementing BasisExtract() with Attention() vs. the previous Linear(). The Attention-based BasisExtract() first applies a multi-head attention mechanism for global segment feature extraction, followed by a fully connected layer to map the features into a small set of basis components.
The table below (720-input, 720-output on the Electricity dataset) shows their scale comparison, where the computational cost and parameter count of the Attention() version are 3-4 times higher than the previous implementation. The second table presents their prediction performance comparison (720-input, 720-output), indicating that on the Electricity and Traffic datasets, the Attention-based BasisExtract() achieves performance improvements. This suggests that increasing the parameter count enhances the expressiveness of the extracted basis components, leading to higher forecasting accuracy.
|BasisExtract($\cdot$)|MACs|Paras|
|:-:|:-:|:-:|
|**Linear($\cdot$)**|2.77M|0.39K|
|**Attention($\cdot$)**|8.32M|0.99K|

|BasisExtract($\cdot$)|Linear($\cdot$) MSE|Linear($\cdot$) MAE|Attention($\cdot$) MSE|Attention($\cdot$) MAE|
|:-:|:-:|:-:|:-:|:-:|
|Electricity|0.207|0.294|0.206|0.297|
|Weather|0.309|0.331|0.324|0.351|
|Traffic|0.456|0.298|0.451|0.294|
|ETTm1|0.413|0.414|0.411|0.412|
|ETTm2|0.352|0.380|0.348|0.384|
|ETTh1|0.429|0.446|0.437|0.452|
|ETTh2|0.400|0.448|0.398|0.443|
---
>**Q2: Selecting an Appropriate P Value Without Prior Knowledge**
When no prior domain knowledge is available, **data-driven heuristics** such as Singular Spectrum Analysis (SSA) or Fourier Transform can help estimate dominant periods.
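As a concrete instance of the Fourier-based heuristic mentioned above (one of the data-driven options, sketched here with a synthetic series, not a snippet from the paper), the dominant period can be read off the FFT magnitude peak:

```python
import numpy as np

def dominant_period(x):
    """Estimate the dominant period of a series from its FFT magnitude peak."""
    mags = np.abs(np.fft.rfft(x - x.mean()))
    k = 1 + np.argmax(mags[1:])  # skip the DC component at index 0
    return len(x) // k

# Synthetic series with a period of 24 plus mild noise
t = np.arange(720)
x = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(0).standard_normal(720)
print(dominant_period(x))  # -> 24
```

The recovered period can then be used directly as the basis length P when no domain prior (e.g., a known daily cycle) is available.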
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their thoughtful responses to my comments. After thoroughly reviewing the rebuttal, I have no further concerns and continue to recommend acceptance of the paper. | Summary: This submission focuses on designing a simple yet efficient model to tackle complex forecasting problems and presents TimeBase, an ultra-lightweight network for long-term time series forecasting (LTSF). The experiments show that TimeBase has a small number of parameters, low computation and memory usage, and efficient CPU inference speed, which is significantly lower than the lightest DLinear. Moreover, TimeBase can not only function as a time-series forecaster but also serve as an efficient complexity reducer for patch-based methods.
Claims And Evidence: Yes. The lightweight forecasting performance is validated in Sections 4.2 and 4.3, while its ability to serve as an efficient complexity reducer for patch-based methods is demonstrated in Section 4.4.
Methods And Evaluation Criteria: Yes. The proposed method is valuable in LTSF and the effectiveness of the proposed solution is well-supported by extensive validation on real-world benchmark datasets.
Theoretical Claims: Yes. In Appendix B, the author discusses the low-rank characteristics of long-term time series and confirms that the generalization lower bound of TimeBase is related to the learned time series basis, highlighting the importance of a high-quality basis vector space and the necessity of the orthogonal constraint.
Experimental Designs Or Analyses: Yes. I think the experimental design is one of the highlights of this research. In particular, Section 4.3 demonstrates how TimeBase can reduce the complexity of PatchTST. Experimental results show that TimeBase reduces the computation of PatchTST to just 1/9 of its original MACs. This illustrates TimeBase's ability to significantly reduce the resource consumption of any patch-based model without sacrificing performance.
Supplementary Material: Yes. I carefully reviewed Appendix A. Lightweight Forecasting Survey, Appendix B. Effectiveness Analysis of TimeBase, and Appendix C. Experiment Details. The supplementary materials are very comprehensive and strengthen the completeness and persuasiveness of the paper.
Relation To Broader Scientific Literature: I think the most significant contribution of this paper is that TimeBase can greatly reduce the spatio-temporal complexity of any model. As demonstrated in the experiments, it can reduce the computation of PatchTST by at least 63%, significantly improving the efficiency of current time series forecasting models.
Essential References Not Discussed: The discussion of related work is sufficient.
Other Strengths And Weaknesses: Strengths:
1.The paper introduces and validates a novel approach for long-term time series forecasting by using temporal patterns for segment-level prediction rather than point-level forecasting.
2.TimeBase significantly reduces spatio-temporal complexity for any model. As revealed in the experiments, it can reduce the computation of PatchTST by at least 63%, greatly enhancing the efficiency of current time series forecasting models.
3.The paper conducts validation on 21 real-world datasets, covering data from different domains, which demonstrates the generalizability of TimeBase.
Weaknesses:
1.The proposed network is extremely lightweight. I suggest the authors explore real-world low-resource scenarios where TimeBase can be applied, to further deepen the motivation behind lightweight forecasting models.
2.All experiments are currently conducted with a 720 input setting. Considering that different models may have optimal input lengths, could the authors provide a comparison of predictions using a 336 input length to PatchTST? This could further strengthen the experimental results.
Other Comments Or Suggestions: 1. The experimental results reported in Table 3 have inconsistent decimal precision, with some values rounded to two decimal places and others to three. The author should standardize this to three decimal places.
2. On page 5, lines 250-251, the author describes TimeMixer as an LLM-based method, but in fact, TimeMixer is not related to LLM. This should be corrected.
Questions For Authors: I notice that the author made a correction to the 'drop_last' parameter in the code. What would happen if this correction had not been made?
Ethical Review Concerns: No ethical concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer JfQc,
Thank you for your interest in our paper and the constructive feedback you provide. We will address your questions point by point.
---
>**W1. Real-world low-resource scenarios**
Thank you for your suggestion! TimeBase can be applied in various resource-constrained environments, such as edge computing, mobile devices, and IoT-based forecasting. In these scenarios, the ability to handle large-scale time series data with minimal computational overhead makes TimeBase an ideal candidate.
---
>**W2. Comparison with more input length**
Thank you and `Reviewer 2K2o` for suggesting an analysis of the effect of different look-back window sizes to improve the experimental quality of our paper.
In our original experiment, we chose a fixed length of 720 time steps to maintain consistency with FITS [1] and ensure a fair comparison. Additionally, we provide experimental **MAE comparisons with input lengths of [96, 336, 720]**, **all with a 720-step output length**, demonstrating that TimeBase maintains both its lightweight design and high prediction accuracy across different input lengths. Two interesting observations from this analysis are:
- Most datasets, i.e., Electricity, ETTm1, ETTm2, show improved prediction performance as the input length increases.
- For lightweight models such as TimeBase, SparseTSF, FITS, and DLinear, when the input length is only 96, none of them achieve good prediction performance on the Traffic dataset, but iTransformer and PatchTST could maintain prediction accuracy.
| Model ||TimeBase|||SparseTSF|||FITS|||iTransformer|||DLinear|||PatchTST||
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Input length|96|336|720|96|336|720|96|336|720|96|336|720|96|336|720|96|336|720 |
| Electricity|0.347|0.294 |0.294 |0.357|0.299 |0.300 |0.355|0.299 |0.302 |0.354|0.305 |0.301 |0.385|0.321 |0.309 |0.346|0.298 |0.299 |
| Weather|0.321|0.336 |0.331 |0.328|0.343 |0.345 |0.329|0.343 |0.346 |0.324|0.336 |0.337 |0.363|0.363 |0.359 |0.313|0.335 |0.334 |
| Traffic|0.384|0.310 |0.298 |0.387|0.305 |0.299 |0.386|0.308 |0.311 |0.305|0.354 |0.314 |0.394|0.378 |0.310 |0.311 |0.311 |0.289 |
| ETTm1|0.448|0.421 |0.414 |0.449|0.417 |0.413 |0.449|0.416 |0.417 |0.457|0.436 |0.439 |0.452|0.424 |0.415 |0.44|0.425 |0.421 |
| ETTm2|0.397|0.383 |0.380 |0.408|0.382 |0.380 |0.408|0.383 |0.380 |0.409|0.397 |0.407 |0.526|0.465 |0.433 |0.404|0.386 |0.386 |
| ETTh1|0.461|0.439 |0.446 |0.464|0.446 |0.448 |0.459|0.453 |0.457 |0.501|0.507 |0.532 |0.53|0.516 |0.517 |0.476|0.471 |0.475 |
| ETTh2|0.445|0.436 |0.448 |0.457|0.434 |0.425 |0.451|0.433 |0.423 |0.446|0.447 |0.470 |0.671|0.628 |0.609 |0.442|0.429 |0.434 |
We appreciate your key suggestions for improving the quality of our paper presentation. We will include all results in the revised manuscript to provide more experimental analysis.
---
>**S1. Decimal Precision in Table 3**
Thank you for pointing this out. We will standardize the decimal precision to three decimal places for consistency in the revised manuscript.
---
>**S2. TimeMixer Description Correction**
We appreciate your attention to detail. We will correct the description of TimeMixer on page 5, lines 250-251, and clarify that it is not based on LLM. This will be updated in the revised manuscript.
---
>**Q1. Impact of the 'drop_last' correction**
The correction to the 'drop_last' parameter ensures that incomplete batches at the end of a sequence are excluded during testing. In some previous baselines, setting 'drop_last=True' with a very high batch size led to the discarding of difficult-to-predict samples, resulting in a significant improvement in prediction accuracy. This issue was later pointed out by FITS [1] and TFB [2], helping the time series forecasting field achieve better evaluation of predictive models. TimeBase addresses this bug to provide a fairer experimental comparison.
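The effect of the `drop_last` setting can be illustrated with simple batch arithmetic (the test-set size and batch size below are hypothetical, chosen only to show the mechanism):

```python
def evaluated_count(n_samples, batch_size, drop_last):
    """Number of test samples actually evaluated by a batched data loader."""
    n_full = n_samples // batch_size
    if drop_last:
        return n_full * batch_size  # the incomplete tail batch is silently discarded
    return n_samples

n = 2875  # hypothetical test-set size
print(evaluated_count(n, batch_size=512, drop_last=True))   # -> 2560 (315 samples dropped)
print(evaluated_count(n, batch_size=512, drop_last=False))  # -> 2875 (all evaluated)
```

With a large batch size and `drop_last=True`, the dropped tail can contain the hardest samples at the end of the chronological test split, which artificially inflates reported accuracy.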
----
[1] FITS: Modeling Time Series with 10k parameters. ICLR 2024.
[2] TFB: towards comprehensive and fair benchmarking of time series forecasting methods. VLDB 2024. | Summary: This paper proposes an ultra-lightweight framework for long-term time series forecasting, TimeBase, which segments the time series, extracts basis components, and then performs forecasting. Furthermore, TimeBase can serve as a plug-and-play module to reduce the complexity of other patch-based models.
Claims And Evidence: For the claim "TimeBase effectively captures essential patterns and interactions within the time series data, allowing it to retain or even slightly improve accuracy," the paper does not clearly outline what essential patterns and interactions are being captured.
Methods And Evaluation Criteria: 1. **Is a fixed basis length P sufficient?** Time series data often exhibit multiple periodic patterns, yet the basis length P must be predefined, which may not fully capture these variations.
2. **Limited contribution:** The proposed method builds on SparseTSF and FITS, with its main novelty being basis extraction. This may constrain the paper’s overall contribution.
3. **Distinction between basis extraction and low-rank adaptation:** What differentiates basis extraction from low-rank adaptation in the context of this work?
Theoretical Claims: 1. Theorem 3.1 is similar to the theorem proposed in SparseTSF. However, it only compares the parameter scale with DLinear and lacks a comparison with SparseTSF, which limits the analysis of its relative efficiency and novelty.
Experimental Designs Or Analyses: 1. **Hyperparameter Search for Baselines:** Since the experiments are conducted on new datasets, was hyperparameter tuning performed for the baseline models? If so, can you provide details on the search space used for each baseline?
2. **Inference Time on CPU:** Why is inference time measured on a CPU? Would the performance gap between models be smaller if measured on a GPU? A comparison on GPU could provide a more balanced evaluation.
3. **Redundancy of Figure 4:** In the experiment "Efficiency in Ultra-long Look-back Window," Figure 4 appears redundant, as Theorem 3.1 already presents the parameter scale of TimeBase. Consider whether Figure 4 adds additional insights beyond what is already stated in the theorem.
4. **Impact of Look-back Window Size:** The study lacks experiments analyzing the effect of different look-back window sizes. Since the look-back window size significantly impacts performance, evaluating its influence would strengthen the analysis.
Supplementary Material: I checked the code base and it looks great.
Relation To Broader Scientific Literature: Encouraging time series researchers to fully utilize sequential data and inspiring the development of backbone networks for pre-trained large-scale LTSF models.
Essential References Not Discussed: FITS and SparseTSF are two fundamental prior works that underpin this study, making it important to clarify their methodologies and highlight the distinctions and contributions of TimeBase.
Other Strengths And Weaknesses: Mentioned above.
Other Comments Or Suggestions: Mentioned above.
Questions For Authors: 1. In Figure 1(b), how is the singular value of the entire dataset calculated?
2. In Figure 2, why does the figure not include the latest baseline, SparseTSF?
3. The resolution of Figure 3 is low. Consider improving its clarity.
4. In line 251, TimeMixer is not an LLM-based method.
5. Can you provide the selected segment length P for all datasets?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear Reviewer 2K2o,
We greatly appreciate your constructive feedback and have made our best to address your concerns.
---
>**C1. Sentence Explanation**
|Term| Explain|
|-|-|
|Essential Patterns|The compact set of basis components extracted from redundant segments|
|Interactions|Inter-correlations between extracted basis components by PatchTST|
**This sentence (Page 6, Line 316-318) describes the plug-and-play integration of TimeBase into PatchTST.** TimeBase enhances the extraction of compact 'Essential Patterns' and enables more effective 'Patch Interactions' for PatchTST, which leads to improved accuracy and reduced model complexity (as shown in Table 3 of the main text). We will refine the description to make it clearer and more understandable.
---
>**M1. Fixed Basis Length**
Most forecasting scenarios (e.g., traffic, electricity) feature a single dominant period or overlapping periods (e.g., a weekly period encompassing a daily period). Balancing data efficiency and accuracy, we set the shortest dominant period P to capture key temporal patterns.
Besides, TimeBase can extract bases of various lengths using different values of P. In **Appendix C.6**, we show that
$\text{MSTimeBase} = \sum_{i} \text{TimeBase}(X; P = p_i)$
yields a 0.006 MSE improvement. The minor improvement suggests that capturing only the primary pattern with TimeBase is sufficient for lightweight forecasting.
---
>**M2. Contribution of TimeBase**
||FITS|SparseTSF|TimeBase|
|:-:|:-:|:-:|:-:|
|Focused Features|Frequency Domain|Time Domain|Time Domain|
|Compression Strategy|Low-Pass Filtering|Downsampling|Basis Extraction|
|Paras|10.5K|1K|**0.39K**|
|MACs|79.9M|12.71M|**2.77M**|
|Time|35s|31.3s|**20.6s**|
TimeBase simplifies LTSF by extracting basis components. In fact, it has distinct differences from FITS and SparseTSF (Analysis is in `D.1 response to Reviewer PC2P`). Overall, **TimeBase’s Key Contributions** are three-fold,
- **Modeling Techniques**: Propose basis extraction with orthogonal constraints to capture primary patterns and avoid redundant computation from surge of time-series.
- **Lightweight, Efficient and Effective Forecaster**: Achieve ultra lightweight but effective forecasting (0.39K Params, 2.77M Macs).
- **Plug-and-Play Usability**: Act as a complexity reducer for patch-based methods, cutting PatchTST’s MACs to 1/9 of its original one (e.g., 14.17G → 1.58G, **Tab.7, Appendix C.2**).
---
>**M3. Basis Extraction and Low-Rank Adaptation**
The key distinction between them lies in how to handle long-term dependencies in LTSF.
- **Basis extraction** identifies a compact set of representative time patterns from historical data. Unlike low-rank approximation, TimeBase explicitly extracts key basis components and uses them as fundamental units for prediction.
- **Low-rank adaptation**, in contrast, often relies on SVD to approximate data with a reduced-rank structure but does not explicitly extract interpretable components.
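As a minimal numeric illustration of this distinction (a sketch with a synthetic periodic series, not from the paper), the SVD view compresses the segment matrix implicitly into a rank-$r$ product, while the rows of $V^\top$ are what a basis-extraction view would keep as explicit, inspectable temporal components:

```python
import numpy as np

t = np.arange(720)
# A periodic series: each 24-step segment is (nearly) a combination of few patterns
x = np.sin(2 * np.pi * t / 24) + 0.5 * np.sin(2 * np.pi * t / 12)
S = x.reshape(30, 24)  # segment matrix: 30 segments of length 24

U, s, Vt = np.linalg.svd(S, full_matrices=False)
r = 2
S_lowrank = U[:, :r] * s[:r] @ Vt[:r]  # implicit rank-r approximation (low-rank view)
err = np.linalg.norm(S - S_lowrank) / np.linalg.norm(S)
print(f"relative error of rank-{r} approximation: {err:.2e}")

# A basis-extraction view would instead keep Vt[:r] as explicit temporal basis
# components and forecast directly in that reduced space.
```

Both views exploit the same low-rank structure of long-term series; the difference is whether the reduced representation is an opaque factorization or an explicit set of basis components used as the unit of prediction.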
---
>**T1. Theoretical Supplement.**
|TimeBase|SparseTSF|DLinear|
|:-:|:-:|:-:|
|$aT+bL+c$|$(TL)/P^2$|$2TL$|

where $a=(R+1)/P$, $b=R/P$, and $c=R$.
TimeBase, with its **O(T+L)** scale, is more efficient in LTSF than SparseTSF/DLinear, whose scales are **O(T×L)**. This will be added in the revised version.
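The scale comparison can be checked numerically with the stated coefficients. The concrete basis count `R` below is an illustrative assumption (the paper's exact value may differ), so the output is a ballpark figure rather than the paper's exact 0.39K:

```python
def timebase_params(T, L, P, R):
    # a*T + b*L + c with a=(R+1)/P, b=R/P, c=R
    return (R + 1) / P * T + R / P * L + R

def dlinear_params(T, L):
    return 2 * T * L

T, L, P = 720, 720, 24
R = 6  # illustrative basis count
print(timebase_params(T, L, P, R))  # -> 396.0  (a few hundred parameters)
print(dlinear_params(T, L))         # -> 1036800 (~1M parameters)
```

With these settings TimeBase is over three orders of magnitude smaller than DLinear, matching the O(T+L) vs. O(T×L) claim.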
---
>**E1. Baseline Settings**
|Old Data|New Data|
|-|-|
|Default setting from official code|1. Search lr in [1e-4, 1e-3, 1e-2]|
||2. Fix & Unify other parameters: dim=64, enc_layers=2|
---
>**E2. Inference Time**
We provide CPU inference time as it is more practical for edge deployment. Thanks for your advice! We now include GPU (A100-80GB) inference times (ms) for 720-output Elec.
|TimeBase|DLinear|Sparse|FITS|iTrans|
|-|-|-|-|-|
|**0.47**|0.58|0.65|1.09|3.3|
---
>**E3. Function of Figure 4**
Theorem 3.1 analyzes model scale theoretically. To intuitively verify TimeBase's efficiency, we visualize three computational metrics (Mem, Time, Paras) under ultra-long look-back windows in Figure 4.
---
>**E4. Impact of Input Length**
Thanks for your advice! We provide comparisons and analysis for input lengths [96, 336, 720] in the `W2 response to Reviewer JfQc`.
---
>**D1. Discussion of FITS and SparseTSF**
Thanks for your valuable advice. We have discussed the differences between TimeBase, FITS, and SparseTSF in the `D.1 response to Reviewer PC2P`, and we will add this discussion in the related work.
---
>**Question**
**Q1.** We apply a sliding window to segment the dataset into (num, patch) samples of 720 input length, perform SVD on each sample, sort the singular values, and then average them to derive the dataset’s typical singular value distribution.
**Q2.** Figure 2 primarily highlights the comparison between TimeBase and DLinear. We have updated SparseTSF (1.0K Parameters, 125.2M Memory, 2.59ms CPU Inference Time, 12.71M MACs) in Figure 2, and you can check it in **our `original anonymous code link` in Abstract**. Thank you!
**Q3-4.** Thank you very much! We will update them.
**Q5.** P=4 (ETTm/Weather) | P=24 (Others) | Summary: This manuscript focuses on improving efficiency in long-term time series forecasting. The proposed framework comprises only two linear layers, yet it achieves superior efficiency compared to recent state-of-the-art methods. Besides, it provides extensive theoretical analysis and plenty of experiments to demonstrate the effectiveness of the proposed method.
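The sliding-window SVD procedure described in **Q1** can be sketched as follows (the series is synthetic and the window, patch, and stride values are illustrative assumptions, not the paper's exact protocol):

```python
import numpy as np

def avg_singular_values(series, win=720, patch=24, stride=720):
    """Slide a window over the series, SVD each windowed segment matrix,
    and average the sorted singular-value spectra across windows."""
    spectra = []
    for start in range(0, len(series) - win + 1, stride):
        sample = series[start:start + win].reshape(win // patch, patch)
        s = np.linalg.svd(sample, compute_uv=False)  # returned sorted descending
        spectra.append(s)
    return np.mean(spectra, axis=0)

rng = np.random.default_rng(0)
t = np.arange(7200)
x = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(7200)

spec = avg_singular_values(x)
print(spec[:3])  # leading value dominates -> low-rank structure of the dataset
```

A sharply decaying averaged spectrum of this kind is the empirical evidence behind the low-rank motivation in Figure 1(b).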
Claims And Evidence: Yes. The claims are well-supported by theory analysis and extensive experiments.
Methods And Evaluation Criteria: Yes, the proposed method is validated on several real-world datasets. Its efficiency and accurate forecasting capabilities have a positive impact on long-term time series forecasting.
Theoretical Claims: Yes. In Section 3.3, the authors provide a derivation of the parameter scale for the proposed method, and the derivation demonstrates that the model scale, as well as the input and output lengths, exhibit an extremely linear relationship, which is significantly smaller than that of the DLinear method.
Experimental Designs Or Analyses: Yes, I have carefully reviewed the experimental design and analysis in the paper. I found that both the design and validation of the experiments are thorough, primarily in two aspects: (1) The paper uses 21 widely-used and publicly available real-world datasets, ensuring a sufficient and representative set of experiments. (2) The experimental validation is comprehensive, not only confirming the proposed method's high accuracy in predictions but also providing a detailed analysis of its efficiency, including model training time, memory usage, and computational complexity.
Supplementary Material: Yes, I have carefully reviewed all the supplementary materials, with particular attention to Appendix C.2. Additional Results.
Relation To Broader Scientific Literature: This paper focuses on the design of ultra-lightweight long-term time series forecasting models and achieves remarkably impressive results. I think this research is inherited from the base decomposition, such as [1].
[1] Bonizzi P, Karel J M H, Meste O, et al. Singular spectrum decomposition: A new method for time series decomposition[J]. Advances in Adaptive Data Analysis, 2014, 6(04): 1450011.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths:**
S1. This paper proposes an extremely minimalistic model for long-term time series forecasting, significantly reducing the model's parameter count and computational complexity. It is an innovative and interesting contribution to the field of time series forecasting.
S2. The paper is well-structured, with a clear and concise methodology, accompanied by good figure illustrations and comprehensive tables, making it highly readable.
S3. The experiments are extensive, with validation conducted on a large number of real-world datasets, including 17 medium-scale and 4 large-scale datasets, thoroughly demonstrating the effectiveness of the proposed method.
S4. The plug-and-play nature of TimeBase significantly reduces the complexity for many patch-based methods, making it a promising area of research.
**Weaknesses:**
W1. Including an analysis that discusses the potential advantages of "Basis Orthogonal Restriction" could further enhance the contributions of TimeBase.
W2. Table 1 provides an efficiency comparison between TimeBase and other forecasting models, including Infer Time (CPU). The specific CPU model used in the experiments should be mentioned in the experimental setup.
W3. Section 3 describes the workflow for univariate time series data. Providing an explanation of how TimeBase can be applied to multivariate time series (MTS) data would help clarify and enhance the workflow description.
W4. I think the authors primarily focus on reshaping time series patterns of a single length. If a dataset contains multiple time series patterns of varying lengths, would TimeBase be well-suited to handle this?
Other Comments Or Suggestions: There is a minor typo to note, that Figure 7 is not referenced in Appendix. I recommend that the authors correct this in the revised version.
Questions For Authors: Q1. Does TimeBase support multiple temporal patterns? For example, Traffic may show both daily and weekly periods.
Q2. I wonder how TimeBase can be applied to multivariate time series (MTS) data?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer n697,
Thank you for taking the time to review our paper and for your valuable suggestions to improve its quality.
---
>**W1. Basis Orthogonal Restriction Analysis:**
We appreciate your suggestion. From the perspective of the data space, the orthogonality of the basis vectors enhances their representation power, enabling any vector in the data space to be expressed as a linear combination of them [1]. Therefore, the time-series basis components should also be diverse and distinct, preventing the extraction of overly homogeneous time-series patterns [2]. Based on this, we apply the Orthogonal Constraint. In the time-series space, each time series segment can be viewed as a composition of several orthogonal basis components. The goal of TimeBase is to transform the long-term forecasting task at the time-step level into a task of basis extraction and prediction at the period level, thus enabling an efficient time-series forecasting model (0.39K parameters, 1000 times smaller than DLinear). In the revised version, we will add a detailed analysis discussing the potential advantages of the "Basis Orthogonal Restriction."
---
>**W2. CPU Detail:**
Thank you for pointing this out. The CPU used in our experiments is an Intel Xeon E5-2609 v4 CPU (16 cores, 2 sockets, 1.7 GHz base frequency), and we will update the manuscript accordingly.
---
>**W3. TimeBase for Multivariate Time Series:**
Thanks for your question. TimeBase does not consider relationships between variables; instead, it transforms the multivariate time series forecasting task into multiple univariate forecasting tasks. We have explained this in the original manuscript on **Page 3, lines 150-156**: “Most existing multivariate time series are homogeneous, meaning that each sequence within the dataset exhibits similar periodicity [3]. This characteristic allows them to be organized as a unified multivariate time series. Based on this property, we employ the Channel Independence [4] to simplify the forecasting of MTS data into separate univariate forecasting tasks.”
---
>**W4&Q1. Handling Multiple Time Series Patterns:**
Most forecasting scenarios (e.g., traffic, electricity) exhibit a dominant period or overlapping periods (e.g., a weekly period encompassing a daily period). Therefore, we set the shortest period as the fixed P. For datasets with multiple periods, TimeBase can extract different basis components using various P. In **Appendix C.6**, we demonstrate that
$\text{MSTimeBase} = \sum_{i} \text{TimeBase}(X; P = p_i)$
could achieve a 0.006 MSE improvement on the Traffic dataset. The marginal improvement suggests that the primary patterns can already be effectively captured by a single, primary basis length.
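For intuition, here is a minimal sketch (our own simplification, not the authors' implementation; the naive coefficient-persistence step is an assumption) of period-level basis extraction and forecasting: reshape the series into period-$P$ segments, take orthogonal basis components via SVD, and persist the last segment's coefficients. An MSTimeBase-style variant would then sum such forecasters over several candidate periods $p_i$:

```python
import numpy as np

def basis_forecast(x, P, k, horizon_periods):
    """Sketch of period-level basis forecasting: reshape the series into
    period-P segments, extract k orthogonal basis vectors via SVD, and
    forecast by persisting the last segment's coefficients."""
    n = len(x) // P
    segs = x[: n * P].reshape(n, P)            # (num_periods, P)
    _, _, Vt = np.linalg.svd(segs, full_matrices=False)
    basis = Vt[:k]                             # k orthonormal period-length components
    coefs = segs @ basis.T                     # (num_periods, k) coefficients
    future = np.tile(coefs[-1], (horizon_periods, 1))
    return (future @ basis).ravel()            # forecast of horizon_periods * P steps
```

On a purely periodic series (e.g. a sine of period 24), a single basis component already reconstructs future periods exactly, which mirrors why the multi-period extension only yields a marginal improvement.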
---
>**C1. Minor Typo**
We acknowledge the typo regarding Figure 7 not being referenced in the Appendix. We will correct this and ensure that Figure 7 is properly referenced in the revised manuscript.
---
>**Question**
**Q2.** TimeBase can be applied to MTS data by independently extracting basis components from each time series in the multivariate dataset. Each time series is processed to capture its key temporal patterns, and these components are then aggregated to form a unified forecast.
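The Channel Independence strategy described above can be sketched as follows (a minimal illustration; `model_fn` stands in for any univariate forecaster and is not the authors' API):

```python
import numpy as np

def channel_independent_forecast(X, model_fn):
    """Apply a univariate forecaster independently to each channel of a
    multivariate series X of shape (T, C); stack the per-channel forecasts."""
    return np.stack([model_fn(X[:, c]) for c in range(X.shape[1])], axis=1)
```

For example, with a naive "repeat the last window" forecaster, a (T, C) input yields an (H, C) forecast, one column per channel.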
---
[1] Time series with periodic structure. Biometrika 1967.
[2] Decomposition principle for linear programs. Operations research 1960.
[3] TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis, ICLR 2023.
[4] A Time Series is Worth 64 Words: Long-term Forecasting with Transformers, ICLR 2023. | null | null | null | null | null | null |
Efficient Quantification of Multimodal Interaction at Sample Level | Accept (poster) | Summary: In this paper, the authors presented a novel method for efficient quantification of multimodal interaction for a single multimodal sample. Pointwise mutual information may be negative, so monotonicity over the existing redundancy framework does not hold for single samples; therefore, the authors proposed an alternative method for determining redundancy: dividing the information into two components, where each component adheres to monotonicity, and redundancy is calculated on each component before being summed together. The authors further proposed a lightweight method to compute each component for a single sample. The authors evaluated their new algorithm on both synthetic and real datasets, demonstrating that their method accurately estimates redundancy, uniqueness and synergy for single samples. The authors further showed that their method is time-efficient, and can be used to improve multimodal ensemble and distillation.
Claims And Evidence: The claims in the submission are supported clearly with empirical evidence.
Methods And Evaluation Criteria: The proposed methods makes sense.
Theoretical Claims: Checked the correctness of the formulation of the new method (estimating the redundancy by dividing the information into two parts). Seems correct to me.
Experimental Designs Or Analyses: I have verified the soundness/validity of the experiments, including the evaluation over synthetic datasets, real-world datasets, time efficiency experiments, and application experiments (showing that LSMI results in better distillation and ensemble performance). The experiment design are sound.
Supplementary Material: I have reviewed all supplementary materials, including synthetic data generation details, baseline details, and the case study.
Relation To Broader Scientific Literature: Previous works in multimodal machine learning demonstrated the importance of quantifying and extracting unique information from each modality as well as cross-modal interactions. While previous works have successfully quantified multimodal interactions over a data distribution and applied them to improve model performance, this work is the first to propose a method that can quantify multimodal interaction on a single data sample. Being able to quantify single-sample multimodal interaction would significantly improve fine-grained understanding of multimodal data, and this paper has also demonstrated that the proposed method can be used to improve multimodal distillation and ensemble.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is well written and well presented. The explanation of the key idea and methodology is clear.
Other Comments Or Suggestions: Minor problem: it seems like the citations for Liang 2023a and Liang 2023b refer to the same paper in the bibliography.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive feedback and constructive comments on our work. We have carefully addressed the citation issue mentioned in the review and have re-examined the other references and formulations. Thank you again for your time and effort.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. My review remains positive.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback. We will carefully incorporate your valuable suggestions to improve our manuscript. | Summary: The paper proposes a novel framework to resolve conflicts between pointwise mutual information (PMI) and Partial Information Decomposition (PID) axioms in multi-modal learning. Traditional PID axioms (non-negativity, monotonicity) are defined for distribution-level redundancy but conflict with sample-level PMI, which can be negative. The authors address this by decomposing PMI into two non-negative components.
Claims And Evidence: The paper’s theoretical claims (axiom compliance, decomposition structure) are well-supported by proofs, and the experiments support its practical claims.
Methods And Evaluation Criteria: Yes. Its practical and interpretability claims are well-supported by the experiments on synthetic data, distillation experiments, and dynamic fusion experiments.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes. For example, the experiments on distillation and dynamic fusion task demonstrate the effect of LSMI.
Supplementary Material: No supplementary material
Relation To Broader Scientific Literature: Existing methods focus on the distribution-level interaction quantification, this work extends them to the sample-level interaction quantification.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength:
Innovative PMI decomposition strategy: Decomposing PMI into r+ and r− to resolve the PID axiom conflict is novel in the field of information theory and PID. Existing work usually avoids the problem of sample-level negative PMI, while this paper directly reconciles the contradiction between theory and practice through decomposition.
Separation of sample and distribution-level redundancy: For the first time, the mathematical treatment of sample-level and distribution-level redundancy is clearly distinguished, which expands the scope of the application of PID.
Weaknesses:
1. Visualization and examples: There is a lack of diagrams (such as the distribution of r+/r− in synthetic data) or examples (such as the redundant decomposition results of specific samples) to assist understanding.
2. How is the method scalable to three or more modalities? Is there any experimental analysis?
3. i(x; y) = i+(x; y) − i−(x; y) in the derivation process may also be less than 0. So will there be samples with i(x1;x2; y)<0 in the actual application process? What is the physical meaning behind it?
Other Comments Or Suggestions: see the weaness
Questions For Authors: see the weaness
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable suggestions. Here are our responses to the questions.
**Q1: Visualization of $r^+$ and $r^-$**
A1: Thank you for this valuable suggestion. To demonstrate the function of the $r^+$ and $r^-$ components derived from the total redundancy $r$, we conducted experiments on a Gaussian Mixture Model (GMM) dataset. We visualize the distributions of $r$, $r^+$, and $r^-$ in the provided [figure a](https://anonymous.4open.science/r/Submission2900-7FB8/Fig_a.png).
As indicated by the distributions, $r^+$ appears to capture the core information consistently present across modalities within a sample, reflecting the minimum entropy along the unimodal marginal distributions (i.e., the most reliably available information from at least one modality). It represents the inherent information content shared by the modalities. In contrast, $r^-$ quantifies the portion of redundancy where individual modalities might struggle or provide conflicting cues regarding the task. It reflects the ambiguity or potential for error introduced if relying on the worse modality.
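One way to make the $r = r^+ - r^-$ split concrete on a discrete toy distribution is the following sketch. It follows a Finn–Lizier-style pointwise construction (taking each of $r^+$ and $r^-$ as a minimum over modalities of the corresponding non-negative surprisal term); the function name and this exact construction are our illustration, not necessarily the paper's estimator:

```python
import numpy as np

def pointwise_interactions(p, x1, x2, y):
    """Sample-level (r, u1, u2, s) for one event (x1, x2, y) of a discrete
    joint pmf p[x1, x2, y], splitting redundancy as r = r+ - r- with each
    part taken as a minimum over the modalities (illustrative construction)."""
    h = lambda q: -np.log2(q)                       # pointwise surprisal, in bits
    p1, p2, py = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))
    p1y, p2y, p12 = p.sum(1), p.sum(0), p.sum(2)
    h1, h2 = h(p1[x1]), h(p2[x2])                   # marginal surprisals
    h1y, h2y = h(p1y[x1, y] / py[y]), h(p2y[x2, y] / py[y])  # conditional surprisals
    r = min(h1, h2) - min(h1y, h2y)                 # r = r+ - r-
    i1, i2 = h1 - h1y, h2 - h2y                     # pointwise unimodal MI
    i12 = h(p12[x1, x2]) - h(p[x1, x2, y] / py[y])  # pointwise joint MI
    u1, u2 = i1 - r, i2 - r
    s = i12 - r - u1 - u2
    return r, u1, u2, s
```

On the copy distribution $x_1 = x_2 = y$ this yields pure redundancy, while on XOR it yields pure synergy, consistent with the intuition behind the $r^+/r^-$ distributions discussed above.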
**Q2: Scalability to three modality**
A2: Thank you for this thoughtful suggestion. Extending interaction analysis on more than two modalities inherently involves quantifying complex interactions, such as the mutual information shared among three modalities and the target task. However, a key challenge arises because the clear theoretical definitions used to decompose two-way interactions into distinct synergistic, redundant, and unique contributions do not readily or uniquely translate to these higher-order cases [1,2].
Given this theoretical limitation, we adopt the widely-used pairwise analysis approach (Appendix C.4 of [3]). This method systematically examines interactions between each pair of modalities. We implemented this approach on the UCF-101 dataset, analyzing the interactions among its three modalities: Vision, frame difference (Diff), and optical flow (OF). The quantitative results of these pairwise interactions are presented in the following Table.
| Modality Pair | $R$ | $U_1$ | $U_2$ | $S$ |
|---|:---:|:---:|:---:|:---:|
| Visual_OF | 2.111 | 2.483 | 0.000 | 0.000 |
| Visual_DIFF | 3.474 | 1.121 | 0.000 | 0.000 |
| OF_DIFF | 1.998 | 0.003 | 1.476 | 0.239 |
The pairwise analysis revealed that vision is the strongest modality, as its unique interactions consistently surpass those of other modalities. Furthermore, we observed a primarily redundant relationship between vision and frame difference, while frame difference and optical flow demonstrated notable synergy. This observed synergy likely arises because frame difference and optical flow, when considered individually, carry less task-relevant information and thus benefit significantly from combining their complementary information to achieve better task performance.
**Q3: Physical meaning of $i(x_1;x_2;y)<0$.**
A3: Thank you for raising this important question. To build intuition, consider the simpler case of negative unimodal information:
$i(x_1;y) = \log\frac{p(y|x_1)}{p(y)}$.
Negativity implies $p(y|x_1) < p(y)$, meaning that observing modality $x_1$ decreases the likelihood of the target $y$ compared to its prior probability. This suggests $x_1$ provides information that contradicts $y$, a situation potentially arising from mislabeled samples.
For the interaction term, defined as:
$i(x_1;x_2;y) = i(x_1;y) + i(x_2;y) - i(x_1,x_2;y)$
a negative value ($i(x_1;x_2;y) < 0$) signifies that the joint information $i(x_1,x_2;y)$ is greater than the sum of the individual information components ($i(x_1;y) + i(x_2;y)$). This phenomenon represents synergy: the modalities $x_1$ and $x_2$, when combined, provide more information about $y$ (or potentially, more "mis-information" if the individual terms are strongly negative) than would be expected from merely summing their individual contributions.
Regarding the redundancy term $r(x_1;x_2;y)$ discussed in our paper (which forms part of the unimodal information, e.g., $i(x_1;y) = r + u_1$), this can also be negative. This is particularly likely if both individual modalities carry contradictory information about the label (i.e., $i(x_1;y) < 0$ and $i(x_2;y) < 0$). In such cases, the redundant information component may also reflect this contradictory information concerning the target $y$.
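A tiny worked example of a negative interaction term at the sample level (XOR with uniform inputs; the numbers are illustrative, computed directly from the stated probabilities):

```python
from math import log2

# y = x1 XOR x2, with x1, x2 uniform; consider the sample (x1, x2, y) = (0, 0, 0)
i_x1 = log2(0.5 / 0.5)               # p(y=0|x1=0)=0.5 vs prior p(y=0)=0.5 -> 0 bits
i_x2 = log2(0.5 / 0.5)               # likewise 0 bits
i_joint = log2(1.0 / 0.5)            # p(y=0|x1=0,x2=0)=1 vs 0.5 -> 1 bit
interaction = i_x1 + i_x2 - i_joint  # 0 + 0 - 1 = -1 bit: negative, i.e. synergy
```

Here each modality alone is uninformative, yet together they determine $y$ exactly, so the joint information exceeds the sum of the unimodal terms.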
[1] Tobias Mages et al. "Decomposing and Tracing Mutual Information by Quantifying Reachable Decision Regions." Entropy, 2024.
[2] Paul Williams et al. "Nonnegative Decomposition of Multivariate Information." arXiv, 2010.
[3] Paul Pu Liang et al. "Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework." NeurIPS, 2023.
---
Rebuttal Comment 1.1:
Comment: For Q1, the author seems to give the visualization results of the entire dataset. However, the reviewer is curious about the visualization results on a single sample, which is the core contribution of the paper. For example, taking an audio-visual sample from the CREMA-D dataset: how do its r, s, and u change when adding noise to the vision modality?
For Q2, why not use CMU-MOSEI for verification? It naturally includes three modalities: audio, vision, and text. The frame difference and optical flow seem to be the same modality (the same physical meaning)?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their valuable suggestions and positive feedback. Here are our responses:
**Q1: Visualization of sample-level interaction with noise.**
A1: Thank you for this valuable suggestion. We have analyzed the impact of noise on interactions at the sample level. Below are results for representative samples, comparing clean conditions to conditions where noise was added to one modality (assumed visual, affecting $\tilde{u}_v$):
| Sample | Clean $r$ | Clean $u_v$ | Clean $u_a$ | Clean $s$ | Noisy $r$ | Noisy $\tilde{u}_v$ | Noisy $u_a$ | Noisy $s$ |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | 0.652 | 0.000 | -0.309 | 0.723 | 0.343 | -0.127 | 0.000 | 0.500 |
| 2 | 0.064 | 0.000 | -0.255 | 0.900 | -0.191 | -0.385 | 0.000 | 1.316 |
| 3 | 0.965 | 0.000 | 0.805 | 0.008 | 1.769 | -1.453 | 0.000 | 1.470 |
| 4 | 0.424 | 0.000 | -1.215 | 2.299 | -0.790 | -0.736 | 0.000 | 2.753 |
From the table above, we can observe that when noise is applied to a single modality, its unimodal mutual information ($u_v + r$) decreases. Additionally, since noise increases the entropy within the modality, introducing greater uncertainty to the visual modality, the calculated values of $r^+$ and $r^-$ also change. This leads to $u_a$ becoming 0 after noise addition, while $\tilde{u}_v$ becomes negative. Our method compares samples that are relatively robust to noise sensitivity (samples 1 and 2) with those that are more sensitive (samples 3 and 4), clearly highlighting the changes in their interactions.
We will add this analysis of sample-level interaction with noise in the revised manuscript.
**Q2: CMU-MOSEI for verification.**
A2: Thank you for your valuable question. As you mentioned, the UCF dataset is characterized by certain correlations among its three modalities (video, optical flow, and frame difference): video provides rich visual information, while frame difference and optical flow are derived from the video modality, modeling and describing the dynamic changes in the video. This experiment effectively demonstrates the interaction effects when different modalities are correlated.
We have also conducted experiments on the CMU-MOSEI dataset (Vision+Audio+Text), which presents a case where the three modalities have weaker correlations but jointly describe the same entity, as shown in the Table below.
| Modality Pair | $R$ | $U_1$ | $U_2$ | $S$ |
|:---:|:---:|:---:|:---:|:---:|
| V+T | 0.121 | 0.000 | 0.163 | 0.005 |
| V+A | 0.116 | 0.010 | 0.000 | 0.012 |
| A+T | 0.127 | 0.000 | 0.248 | 0.002 |
Our experimental results reveal that among these three modalities, Text serves as the primary modality with strong unique characteristics, while Vision and Audio modalities exhibit lower discriminative power individually but contribute more synergistic information when combined. We will include these three-modality experiments and a comprehensive discussion of the results in our revised manuscript.
We sincerely thank the reviewers for their thorough review and thoughtful responses. Your suggestions have significantly enhanced the quality of our paper, and we will revise and refine our manuscript according to your valuable feedback. | Summary: The paper tackles an important question of capturing interactions between modalities and proposes a lightweight
entropy-based multimodal interaction estimation approach for efficient and precise sample-wise interaction measurement across various continuous distributions. The authors demonstrate the efficacy of their approach using circuit logic recovery and experimentation with real datasets.
Claims And Evidence: The authors claim that the definition of redundancy at the event (sample) level is clearer and more straightforward than that of uniqueness or synergy. Hence they suggest obtaining redundant information by partitioning mutual information into components relative to the target, applying a redundancy measure suitable to each component. They name their method Lightweight Sample-wise Multimodal Interaction (LSMI).
Methods And Evaluation Criteria: The authors evaluate on synthetic and real-world datasets (KS, UCF, and CREMA-D) and compare to several other methods (Weighted ensemble vs. LSMI ensemble and the PID method). However, the conclusions are relatively unimpressive:
"The comparison results are shown in Table 2. We observe that our LSMI estimator is largely consistent with the PID-Batch
method. Furthermore, in the KS, Food-101, and MOSEI datasets, our method aligns with the top two highest interactions in terms of redundancy and uniqueness, respectively."
Theoretical Claims: I do not think the paper has any theoretical claims
Experimental Designs Or Analyses: I feel that the paper would greatly benefit from a more precise discussion of its contribution. It might help to identify specific cases where their "sample"-level redundancy estimation works better and motivate the readers with those cases before providing the numbers. It would also help to state in Table 1 whether lower or higher numbers are better.
Supplementary Material: Nothing is provided
Relation To Broader Scientific Literature: I think the topic of capturing multimodal interactions is extremely important for a broader community that aims to develop MLLMs and align modalities. Understanding interaction between training samples' modalities might improve the training purpose. Unfortunately this paper does not go beyond estimation of interactions.
Essential References Not Discussed: I'm only familiar with L.P. Morency works on capturing multimodal interactions, and those were cited.
Other Strengths And Weaknesses: Overall, it was really hard to follow the paper and its specific contributions. A year ago I was very impressed by the work from Morency's lab on capturing multimodal interactions, but this paper does not contribute enough on top of the original work (the PID algorithm they compare to).
Other Comments Or Suggestions: Please rewrite the paper, clearly stating the contribution and motivate the sample-level estimation of interactions. It will be helpful to see how the findings can be used in downstream tasks like ImageBind or even MLLM training.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable suggestions. Here are our responses to the questions raised about our paper.
**Q1: The contribution of this work compared to the work from Morency's Lab.**
A1: Thank you for this question. The core contribution of our work, compared to foundational studies from Morency's Lab [1], is the development of efficient sample-wise quantification of multimodal interactions.
We build upon the same established interaction concepts (redundancy, uniqueness, synergy) common to both [1] and other literature [2,3]. However, the key distinction lies in mathematical modeling of interactions:
Morency's work (PID) [1] typically provides dataset-level aggregate measures of interactions.
Our method enables quantifying these interactions for **each individual sample**.
This efficient sample-level resolution is our main technical contribution and is vital for applications requiring instance-specific insights, such as targeted model integration or knowledge distillation strategies explored in our experiments (Section 4.3). Aggregate measures (PID) cannot readily support these tasks.
Notably, the value of such pointwise measures was recognized as an important next step by Liang et al. [1]: "*A natural extension is to design pointwise measures... which could help for more fine-grained dataset and model quantification...*" Our work provides a concrete method to realize this, demonstrating its contribution and relevance.
**Q2: Application of LSMI to Downstream Tasks**
Thank you for the suggestion. Our LSMI method enables the estimation of sample-level interactions, which can indeed benefit downstream tasks beyond the model ensemble and distillation examples discussed in the main paper.
To demonstrate this, we applied LSMI to guide the fine-tuning process for a specific task. We used the ImageBind model, which focuses on modality alignment, and fine-tuned it on the Kinetics-Sounds (KS) dataset (Audio+Visual). Specifically, we employed LSMI to quantify the degree of redundancy between the audio and visual modalities for each sample. Based on this metric, we partitioned the KS dataset into two subsets: one containing samples with relatively high redundancy ($R$) and another with relatively low redundancy ($U+S$, comprising samples dominated by unique or synergistic information).
We then fine-tuned the ImageBind model separately on these two subsets ($R$ and $U+S$) and compared the results against fine-tuning on the entire dataset ('alldata'). The performance, evaluated by classification accuracy on KS dataset, is shown below:
| Fine-tuning Set | V+A Performance | V Performance | A Performance |
|-----------------|-----------------|---------------|---------------|
| alldata | 0.8539 | 0.8183 | 0.7270 |
| U+S | 0.8496 | 0.8045 | 0.7289 |
| R | 0.8772 | 0.8241 | 0.7257 |
As observed, fine-tuning specifically on the high-redundancy subset ($R$) identified by LSMI yielded the best performance for the combined modalities (V+A) and the visual modality (V). This suggests that leveraging LSMI to identify and focus on highly redundant samples can be an effective strategy for enhancing model fine-tuning, potentially by reinforcing learning when sufficient information is present in individual modalities.
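The subset construction described above can be sketched as follows (a hypothetical helper: the median cutoff and the name `split_by_redundancy` are our own choices; LSMI would supply the per-sample redundancy scores):

```python
import numpy as np

def split_by_redundancy(r_scores):
    """Partition sample indices into a high-redundancy subset (R) and the
    rest (U+S-dominated), using the median per-sample redundancy as cutoff."""
    r_scores = np.asarray(r_scores, dtype=float)
    cutoff = np.median(r_scores)
    high_r = np.flatnonzero(r_scores >= cutoff)   # fine-tune contrastively on these
    low_r = np.flatnonzero(r_scores < cutoff)
    return high_r, low_r
```

Fine-tuning would then proceed on `high_r` (the R subset) versus `low_r` (the U+S subset), as in the table above.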
**Q3: Result state in Table 1**
A3: We appreciate you pointing it out. Table 1 focuses on evaluating the precision of different interaction estimation methods. We utilize synthetic datasets based on logic relations with additive noise, which allows for comparison against known ground truth (GT) interaction values. This discrete logic carries explicit interaction information and has been widely used in previous studies [1,4]. The objective is to assess **how closely each estimator's output matches the GT**. As the results indicate, our method achieves interaction estimates that align more closely with the GT compared to the baseline estimators evaluated. This finding underscores the robustness of our approach, demonstrating its effectiveness in capturing underlying data interactions despite the presence of noise. We have enhanced the description surrounding Table 1 in the revised manuscript to better emphasize this point.
[1] Paul Pu Liang et al. "Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework." NeurIPS, 2023.
[2] Benoit Dufumier et al. "What to align in multimodal contrastive learning?" ICLR, 2025.
[3] Paul Pu Liang et al. "Multimodal learning without labeled multimodal data: Guarantees and applications." ICLR, 2024.
[4] Nils Bertschinger et al. "Quantifying Unique Information." Entropy, 2014.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed responses to my comments and other comments. Overall, the contribution is more clear to me (a direct extension, following Liang and Morency's suggestion). However, I think that a real contribution should come in the application to real-world problems. I would assume that the benefit should come from synergy between the modalities, and I'm confused regarding the above-mentioned ImageBind experiment, similar to wohR's
Q6: Appreciate the additional experiment. Could you explain the results here? As I understand, the overall performance increases when redundant data is employed, rather than unique and synergistic samples. This seems counter-intuitive.
Therefore, while I slightly raised my rating, I cannot suggest to accept this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable comment. Please find our detailed response below.
**Q: Explanation of ImageBind experiment.**
A: Thank you for your valuable question regarding the ImageBind experiment results. The key reason lies in the contrastive learning mechanism used for fine-tuning ImageBind in our experiments. This method aims to align representations by pulling positive pairs (inputs assumed to represent the same concept) closer together in the embedding space. For this alignment to be effective, the fundamental assumption is that the paired modalities contain consistent or shared information relevant to the alignment goal.
In our setup, we specifically fine-tuned the pre-trained ImageBind model using contrastive learning on subsets of the KS dataset, targeting improved modal alignment for that specific downstream task. We then evaluated the effectiveness of this alignment (and the quality of the fine-tuned features) by training a simple linear layer on the features to perform the task. Within this context, we can analyze data based on modal interaction:
Data with predominantly Redundant interaction: These pairs naturally fulfill the consistency requirement for contrastive learning. Both modalities convey similar or overlapping information relevant to the task. Using these pairs for contrastive fine-tuning effectively strengthens the alignment based on shared semantics, leading to better features for the downstream linear evaluation.
Data with predominantly Unique or Synergistic interaction: By definition, these pairs contain information that is significantly different, inconsistent, or emerges only through combination across the modalities. Attempting to force alignment between these inherently less consistent representations via a contrastive loss introduces noise or conflicting optimization signals. This hinders the specific fine-tuning process focused on strengthening existing semantic alignment, potentially degrading performance compared to using more consistent data or the original pre-trained model.
Our LSMI method serves precisely to identify data pairs exhibiting high task-specific redundancy (information consistency). The experiment demonstrates that selecting such data improves the contrastive fine-tuning outcome for ImageBind because it provides the consistent positive pairs this particular learning method leverages most effectively.
Therefore, the result highlights a specific property of the contrastive alignment technique itself during fine-tuning: it benefits more from informational consistency (redundancy) in the training pairs.
Regarding synergistic interaction, it typically describes cases where task-relevant information emerges from combining modalities, even if the individual modalities are insufficient on their own. Contrastive alignment methods, like those in ImageBind pre-training/fine-tuning, are primarily designed to map existing semantic similarities into a shared space. They are inherently less suited to capturing this emergent information that arises purely from the combination in synergistic cases, as forcing alignment between representations without strong pre-existing semantic overlap can be counterproductive. Our experiments, showing degraded performance when fine-tuning on less consistent data, support this distinction.
Thank you again for your time and valuable feedback. | Summary: The paper presents LMSI, a light-weight framework for estimating redundant, unique and synergistic information in multimodal tasks through sample-level partial information decomposition. While the previous approaches require computationally expensive tasks like distribution estimation, LMSI is cheaper without compromising on the accuracy. Further, LMSI is positive, monotonic and is grounded information theory making it attractive to use. The authors supplement LMSI with many experiments, and against real-world application of model distillation.
#####update after the rebuttal######
I thank the authors for engaging in the rebuttal and trying to answer my queries. However, I still have my reservations about the ImageBind and OOD experiments. The authors make some very bold and interesting claims about the role of synergistic and redundant interactions in the paper. These claims might be even correct but currently stand unsubstantiated. Since the work leaves some critical ends loose, I maintain my initial score.
##############################
Claims And Evidence: Multimodal biases arise from (1) dataset only, (2) model only, and (3) both (referred to as categories below). \
First the authors test the validity of their method on a simulated dataset (Section 4.1), which falls in the 2nd category. Next, in section 4.2.2 (Dataset Interactions), the authors quote in L354-358 "we select models with the lowest generalization risk, which are considered to better model the underlying distribution, and average their sample-level interactions to represent the dataset-level interactions." I am not convinced by this. While the dataset might have biases, the model might not necessarily be relying on those biases (that is, on the same dataset, we can have both biased and unbiased models [1]). Therefore, section 4.2.2 is actually dataset+model (category 3). However, the section is misleadingly titled "Dataset Interactions", and the discussion on model interactions is absent in the paper. Perhaps a better method to achieve sample-level interactions would be to use LSMI on (1) an overfit model or (2) an ensemble of models from different families, to ameliorate the effect of the model's biases.
References: \
[1] Rawal, Ishaan Singh, et al. "Dissecting multimodality in VideoQA transformer models by impairing modality fusion." ICML 2024
Methods And Evaluation Criteria: Can this method be used for more concrete tasks to validate its correctness? For example, does LSMI corroborate findings like those in [1] and [2], where the authors explicitly propose debiased datasets and models (that is, decrease the redundant information in the model and dataset)?
References: \
[1] Buch, Shyamal, et al. "Revisiting the" video" in video-language understanding." CVPR 2022. \
[2] Goyal, Yash, et al. "Making the v in vqa matter: Elevating the role of image understanding in visual question answering." CVPR 2017.
Theoretical Claims: Yes, LSMI is grounded in fundamental ideas of information theory
Experimental Designs Or Analyses: The methods section of the paper lacks many details. The implementation details, model details, details of the finetuning experiment are missing.
Supplementary Material: Yes, I went through the entire supplementary material.
Relation To Broader Scientific Literature: The work is quite important and can have a lot of different applications in dataset collection, filtering and interpretability tasks.
Essential References Not Discussed: 1. QUAG [1], which was also used for "multimodal interaction quantification at the sample level for real-world data"
2. Comparison against and discussion of PID [2]
References: \
[1] Rawal, Ishaan Singh, et al. "Dissecting multimodality in VideoQA transformer models by impairing modality fusion." ICML 2024 \
[2] Liang, Paul Pu, et al. "Quantifying & modeling multimodal interactions: An information decomposition framework." NeurIPS 2023
Other Strengths And Weaknesses: **Strengths:**
1. The paper tackles an important task of multimodal interaction quantification
2. LMSI is light-weight and accurate
3. LMSI is grounded in theoretical and well-founded ideas in information theory
**Weaknesses:**
1. Overarching claims about dataset biases without considering model biases (see Claims And Evidence)
2. Lack of confirmatory real-world experiments (see Methods And Evaluation Criteria)
3. No comparison against attention-based methods (early, mid and late fusion strategies)
I believe that the work is headed in the right direction however it could be misinterpreted because of lack of discussions and supporting experiments.
Other Comments Or Suggestions: The supplementary section can be improved with more examples and details.
Questions For Authors: 1. What is the correlation between human judgement and LMSI-predicted interactions in Table 2?
2. How does LMSI account for noise or missing information? The labelling process is not perfect, especially for subjective or (semi-)automatic labelling tasks.
3. How to make sense of an absolute LMSI score? Can it be consolidated into a single number, ideally a measure with well-behaved bounds? Currently, I am not sure whether u=0.1 is good or bad.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and suggestions. Here are our responses:
**Q1: Model biases in dataset interactions**
A1: We appreciate the reviewer's valuable point. Indeed, quantifying multimodal interactions requires models, making the estimates inherently model-dependent. Consistent with prior work yielding reliable outcomes [1], our method employs fully trained multimodal models. We focused on models demonstrating strong generalization, as we posit these better reflect the underlying data distribution and better model the interactions within it.
Following the reviewer's suggestion, we constructed an ensemble integrating several of the best-generalizing models to measure interactions. As detailed in [Table a](https://anonymous.4open.science/r/Submission2900-7FB8/Tab_a.png), the interaction estimates derived from this ensemble were highly consistent with those from individual top-performing models, exhibiting negligible differences.
**Q2: Comparison against attention-based methods**
A2: Thank you for this valuable suggestion. We validated LMSI on a Multimodal Transformer with hierarchical attention (from multi-stream to single-stream) [2], where we varied the layer $l$ at which cross-modal fusion was first introduced. A smaller $l$ indicates an earlier fusion architecture.
The experiment results are shown in [Table b](https://anonymous.4open.science/r/Submission2900-7FB8/Tab_b.png).
Our analysis revealed that interactions learned from later fusion stages were predominantly characterized by redundancy, whereas interactions learned from earlier fusion stages demonstrated a stronger tendency to capture synergistic relationships between modalities.
**Q3: Understand the absolute scores of LMSI**
A3: Thank you for this insightful question. LMSI components ($r$, $s$, $u_1$, $u_2$) measure information quantity (in nats/bits) from different multimodal interactions. We can utilize relative proportions (e.g., $\hat{r} = \frac{r}{i(x_1,x_2;y)}$) as a bounded, standardized metric. These relative scores ($\hat{r}$, $\hat{s}$, $\hat{u}_1$, $\hat{u}_2$) sum to 1 and provide a normalized interaction view. The optimal values of interaction varies by task - video tasks show high $\hat{r}$, while sentiment analysis favors $\hat{u}$, as shown in Table 2 in the main paper.
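The normalization described above can be sketched in a few lines (our own illustration, not the authors' code; the raw interaction values are hypothetical):

```python
# Sketch of the relative-proportion metric (our own illustration, not
# the authors' code). Raw interaction values r, u1, u2, s are assumed
# non-negative and assumed to sum to the multimodal mutual information
# i(x1, x2; y).
def normalize_interactions(r, u1, u2, s):
    total = r + u1 + u2 + s  # equals i(x1, x2; y)
    if total == 0:
        raise ValueError("no task information to decompose")
    return tuple(v / total for v in (r, u1, u2, s))

# Hypothetical values in nats: a redundancy-dominated sample.
r_hat, u1_hat, u2_hat, s_hat = normalize_interactions(0.6, 0.1, 0.0, 0.3)
```

The normalized scores sum to 1, so $\hat{r}$, $\hat{s}$, $\hat{u}_1$, $\hat{u}_2$ can be compared across samples regardless of the absolute information content.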
**Q4: Comparison with human judgment**
A4: Thank you for this point. Direct numerical comparison is challenging because human evaluations are typically sample-specific and inherently subjective, making it difficult to accurately represent the information quantity across the entire data distribution. Despite these differences, human judgments can provide valuable insights into the perceived relative importance or ranking of interactions within specific instances. Following precedents like [1], we investigated the alignment between LMSI and human perception by focusing on the dominant interaction types identified by both. As shown in Table 2, LMSI demonstrates reasonable consistency with human assessments in identifying top interactions across datasets.
**Q5: LSMI Behavior with Noise**
A5: Thank you for your question. LSMI quantifies task information $Y$ across modalities $(x_1, x_2)$, making it sensitive to information quality. Label noise reduces the multimodal mutual information $i(x_1, x_2; y)$, affecting LSMI's interaction measures. We added noise to image-text pairs [Table c](https://anonymous.4open.science/r/Submission2900-7FB8/Tab_c.png), observing decreased interactions, especially for information-rich samples. This confirms LSMI's sensitivity to noise.
**Q6: Request for concrete tasks**
A6: Thank you for this suggestion. We wish to clarify that our method, LMSI, has already been validated on several concrete tasks: sentiment recognition (MOSEI), action recognition (KS), and sarcasm detection (URfunny).
To further demonstrate LSMI's practical utility, we applied it to guide ImageBind fine-tuning on the KS dataset (Audio+Visual modality). Using LSMI, we quantified sample-wise audio-visual redundancy and partitioned the KS dataset into two subsets: high-redundancy ($R$) and low-redundancy ($U+S$). We then fine-tuned ImageBind separately on these subsets as well as the full dataset ('alldata'), comparing their classification performance. The results, presented in [Table d](https://anonymous.4open.science/r/Submission2900-7FB8/Tab_d.png), demonstrate that our data partitioning strategy based on LSMI analysis can enhance fine-tuning outcomes. While our VQA task experiments are still in progress due to time limitations, these results highlight the potential of LSMI for improving model fine-tuning.
[1] Paul Pu Liang et al. "Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework." NeurIPS, 2023.
[2] Peng Xu et al. "Multimodal Learning with Transformers: A Survey." TPAMI, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I went through the rebuttal and the additional experiments. Some follow-up comments:
Q2: I thank the authors for adding experiments for early, mid and late fusion. The finding should add value to the paper.
Q3: I doubt if the relative proportions would be really informative in comparing the models. The individual values still might be very "off" across different models, and I am not sure if this is a good way of comparing the models.
Q4: Since human evaluations and the proposed method are both based on sample-level statistics, I am unable to understand why the correlation across the evaluations has not been reported.
Q5: Thanks for adding these results. Could you clarify why did you select samples with u_2 = 0? Perhaps these findings are confounded in unimodal biases of the model?
Q6: Appreciate the additional experiment. Could you explain the results here? As I understand, the overall performance increases when redundant data is employed, rather than unique and synergistic samples. This seems counter-intuitive.
I still believe that the work has its own merits but could benefit from more refinement with zero-shot OOD evaluations.
---
Reply to Comment 1.1.1:
Comment: We truly appreciate your recognition of our work, and we believe the questions you posed will help us sharpen our core contributions. Here are our point-by-point responses, which we hope address your concerns:
**Q3: Metric for comparing models.**
Thank you for clarifying. Our method supports fine-grained model comparison at the sample level by estimating information contributions across interaction types. To compare models A and B, we: (1) compute interaction vectors $V^A = (r^A, u_1^A, u_2^A, s^A)$ and $V^B = (r^B, u_1^B, u_2^B, s^B)$ for each sample, (2) calculate the distance between these vectors, and (3) average distances across samples.
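The three steps above can be sketched as follows (our own illustration; the choice of Euclidean distance is an assumption, as the rebuttal does not name the distance metric):

```python
import math

def interaction_distance(vecs_a, vecs_b):
    """Average per-sample distance between the interaction vectors
    (r, u1, u2, s) of two models A and B."""
    assert len(vecs_a) == len(vecs_b)
    dists = [math.dist(va, vb) for va, vb in zip(vecs_a, vecs_b)]
    return sum(dists) / len(dists)
```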
These sample-level differences are utilized in our interaction-based ensemble method (Section 4.3.2), where smaller distances receive higher fusion weights, improving ensemble performance. Results in [Figure b](https://anonymous.4open.science/r/Submission2900-7FB8/Fig_b.png) show interaction similarities between models across CREMAD datasets.
While this provides sample-level analysis, the 'relative proportions' metric discussed previously offers dataset-level summaries of model preferences for interaction types. Our framework thus enables evaluation at multiple granularities: detailed sample-level comparisons between models and characterization of overall model biases in multimodal information utilization.
**Q4: Correlation with human study.**
Thank you for this suggestion. Following it, we analyzed the correlation between sample-level LSMI scores and corresponding human evaluations on the Food-101 dataset. This dataset was selected because the baseline achieves high performance on it, so its interaction modeling is relatively accurate. We found strong Pearson correlation coefficients between LSMI and human scores: 0.98 for redundancy and 0.95 for text uniqueness. These results indicate that LSMI aligns well with human judgments regarding interaction qualities.
**Q5: Reason for $u_2 = 0$.**
Thank you for your question. In our Food-101 experiment, the text modality typically contains more information than images and has higher entropy, resulting in most samples having $u_2=0$. We also analyzed samples with $u_2>0$, with results in [Table c](https://anonymous.4open.science/r/Submission2900-7FB8/Tab_c.png) showing how unique image information responds to label noise.
**Q6: Experimental result about ImageBind.**
Thank you for your valuable question. We fine-tune ImageBind using a contrastive loss specifically to demonstrate how LSMI can enhance targeted fine-tuning through strategic data selection. Contrastive learning performs best when aligning modalities with shared information (i.e., 'redundant' data), which improves representation quality and downstream performance. In contrast, with 'unique' or 'synergistic' data, modalities lack sufficient consistency. Forcing alignment between these inconsistent pairs with minimal semantic overlap introduces optimization noise that can degrade rather than enhance representation quality.
LSMI serves as an effective data selection mechanism by precisely identifying samples with high inter-modal consistency (redundancy). By fine-tuning ImageBind using contrastive loss exclusively on the data subset selected by LSMI, we concentrate the beneficial contrastive alignment process on the most suitable samples. Our experimental results convincingly demonstrate the effectiveness of this targeted approach, showing significantly improved performance when fine-tuning on the LSMI-selected subset compared to scenarios involving less consistent data or random selection strategies.
**Q7: OOD evaluation.**
Thank you for suggesting OOD scenarios as a valuable validation approach for our estimator. Following [1], we conducted OOD evaluation using the Multimodal Near-OOD protocol on UCF and Kinetics-Sounds (KS) datasets, training on in-domain (ID) data and evaluating interactions on both ID and OOD data. As shown in [Table e](https://anonymous.4open.science/r/Submission2900-7FB8/Tab_e.png), under OOD conditions, total information decreases, redundancy significantly reduces, while synergy increases. This implies that in OOD scenarios, shared information between modalities becomes less reliable as the distribution shifts away from the training data, reducing redundancy. Simultaneously, increased synergy indicates that the model leverages more complementary interactions. This shift from redundancy-dominated to synergy-dominated processing represents a key adaptation mechanism for generalization. We will include this discussion in the revised version.
We sincerely appreciate your valuable suggestions and insightful feedback. Your suggested refinements indeed helped us better polish the value of our paper. We are truly grateful for your assistance!
[1] Dong, H. "MultiOOD: Scaling Out-of-Distribution Detection for Multiple Modalities." NeurIPS 2024. | null | null | null | null | null | null |
Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities | Accept (poster) | Summary: This paper proposes Multi-Task Upcycling (MTU), a method to extend pre-trained text-to-image diffusion models for multiple image-to-image generation tasks without significantly increasing computational complexity or model parameters. The key idea is to replace the Feed-Forward Network (FFN) layers with smaller FFN experts combined through a dynamic routing mechanism guided by task-specific embeddings. This approach enables the model to support tasks like image editing, super-resolution, and inpainting, while maintaining comparable performance to single-task fine-tuned models with similar computational costs. The MTU method is inspired by Mixture-of-Experts (MoE) models commonly used in language models and adapts them for diffusion models.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The method is well-motivated and evaluation criteria make sense for the image-to-image generation tasks. However, some qualitative examples could better highlight failure cases, especially in challenging scenarios.
Theoretical Claims: No significant novel theoretical claims are made. The routing mechanism and expert combination follow the concept of fine-grained MoE models in the LLM literature and adapt it for multi-tasking in diffusion models.
Experimental Designs Or Analyses: The experimental design is comprehensive, with ablation studies on the number of experts, routing mechanisms, and comparisons to PEFT methods. However, the paper could benefit from analyzing the optimal number of experts (N=4 for SDv1.5 and N=1 for SDXL).
Supplementary Material: The appendix includes dataset details, training and inference details, and qualitative examples.
Relation To Broader Scientific Literature: The paper is well-positioned in the multi-task diffusion models and Mixture-of-Experts (MoE) literature, and promotes the research on multi-task diffusion modeling methods.
Essential References Not Discussed: This paper includes the work related to the key contributions.
Other Strengths And Weaknesses: Strengths:
1. The MTU framework presents a simple yet effective design that minimizes parameter inflation while preserving the model’s multi-task capabilities.
2. The method is well-suited for on-device deployment, addressing an important challenge of diffusion models in resource-constrained environments.
3. The paper includes comprehensive ablation studies.
Weaknesses:
1. The paper provides limited analysis of failure cases, which would help better understand the limitations of the proposed method across diverse tasks.
2. The optimal number of experts is not rigorously analyzed. For example, the observation that MTU with 4 experts outperforms MTU with 1 expert on SDv1.5 lacks a clear explanation.
3. The significant performance degradation of MTU on SDXL with 4 experts compared to 1 expert raises concerns about scalability. Further analysis is needed to clarify why increasing the number of experts negatively impacts performance on larger models.
4. The paper does not explore task interference effects, which could potentially affect the model’s performance when handling multiple tasks simultaneously. Including an analysis of how the presence of certain tasks impacts others would strengthen the evaluation.
Other Comments Or Suggestions: No other comments or suggestions.
Questions For Authors: See Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your insightful feedback. We answer your concerns below.
**Concerns over scalability of experts in SDXL:** We acknowledge your concern regarding the performance drop observed when increasing the number of experts in SDXL —a point also raised by Reviewer ptLD. In our rebuttal for Reviewer ptLD, we demonstrate that multi-task performance is linked to the width (i.e., expert capacity) and depth of the model. Specifically, reducing the width of the experts in shallow models like SDv1.5 works well, however in the case of deeper models like SDXL, larger experts perform well.
We point out that in Tables 1 and 4 of the main paper, we maintained the overall model size by reducing the hidden dimensions of the experts as their number increased. However, when this constraint is removed (as shown in Table A of ptLD's rebuttal), increasing the number of experts in our MoE setup significantly enhances multi-task performance while remaining more efficient than deploying separate models for each task. These results address your concerns regarding scalability and align with our goal of avoiding parameter inflation as the number of tasks grows.
**Exploring task interference:** We thank the reviewer for raising this important point. In the table below, we present an analysis of how different tasks interact by training various combinations of the four tasks considered in our paper. For this analysis, we trained SDv1.5 with $N=4, G=0.25$ (where $G$ is the ratio of an expert's dimension in MTU to the FFN dimension of the original model) across all task combinations, and we also report the performance of single-task models trained without interference. (Note: In the main paper, the performance of the super-resolution (SR) model is reported for an open-source version that uses image captions to generate high-resolution images. For a fair comparison and to analyze task interference, we trained an SR model with an empty string as the text condition and report its performance accordingly. Please look at Section A.2 in the paper.)
| Tasks | T2I | IE | SR | IP |
|-------------|-----------|-------------|----------|-------------|
| | FID ↓ | I-T Dir Sim ↑ | LPIPS ↓ | I-I Dir Sim ↑ |
| Single-task | 12.9 | 15.4 | 29.3 | 46.5 |
| T2I-IP | 17.9 | -- | -- | 30.0 |
| T2I-IE | 6.9 | 18.5 | -- | -- |
| T2I-SR | 13.4 | -- | 22.3 | -- |
| SR-IP | -- | -- | 36.2 | 30.2 |
| SR-IE | -- | 17.1 | 24.6 | -- |
| IE-IP | -- | 20.6 | -- | 52.6 |
| T2I-IE-SR | 7.0 | 18.0 | 22.7 | -- |
| T2I-IE-IP | 13.5 | 17.1 | -- | 45.2 |
| IE-SR-IP | -- | 16.8 | 25.1 | 35.8 |
| All | 7.2 | 17.2 | 24.8 | 44.0 |
Our summary of task interference from the above table -
- First, we observe that image editing (IE) and image inpainting (IP) are highly compatible tasks, with performance increasing from 44.0 in our MTU model to 52.6 when combined. In contrast, super-resolution (SR) and text-to-image (T2I) appear to be less compatible with IP, likely because T2I and SR require generating new objects, whereas IP focuses on removing objects. Notably, the compatibility between IP and IE may stem from the fact that the IE dataset (InstructPix2pix) includes editing instructions that also involve object removal.
- We observe that T2I and IE are highly compatible, as training them together improves IE performance—an effect also highlighted in Figure 5 in the main paper, where these tasks select the same experts. While SR and IE may be compatible when trained together, adding IP into the mix leads to a significant drop in SR performance. Moreover, when T2I, IE, and IP are trained concurrently, T2I performance declines; however, the strong compatibility between IP and IE ensures that their performance remains high.
- In our multi-task model, where all tasks are trained jointly, we observe that the performance of the T2I, IE, and SR tasks improves compared to their single-task baselines, while performance decreases for IP by 2 points. This indicates that some degree of task interference may be prevalent.
We appreciate the reviewer for highlighting this issue. However, we view reducing task interference as an important direction for future work and will address this as a limitation in the revised version of the paper. | Summary: This paper suggests that upcycling the text-to-image (T2I) diffusion model when adapting it on the multi-task learning. In detail, when fine-tuning T2I diffusion mode with multiple tasks (e.g. SR, Inpainting, T2I generation, Image editing), they argue to utilize Mixture-of-Experts (MoE). For the motivation, they observe that the fine-tuning diffusion model on a specific task leads to significant changes in FFN parameters compared to the other modules such as attention layers. Thus, they divide FFN into multiple experts with a task-specific layer normalization. Experimental results show that the proposed Multi-Task Upcycling (MTU) achieves a better performance compared to other PEFT methods. Notably, they also show that the model w/ MTU surpasses a single expert model fine-tuned on a single task.
Claims And Evidence: - They claim that when applying multi-task learning to the diffusion model (i.e. training diffusion model on several image generation tasks), we need to use multi-task upcycling.
- As evidence, they show the experiment that the parameters of FFN change mostly when fine-tuning the diffusion model to a specific task, and achieving the highest performance compared to the models only training with SA or CA.
- Experimental results also support their claim, as diffusion model trained on multiple tasks with/ MTU surpasses a single expert model on a specific task.
- One major concern is the experiment in Table 4. It shows that a single expert is enough in most cases, which weakens the significance of the authors' claim about the importance of the MoE architecture (instead, just a task-specific normalization may be sufficient for multi-task learning in diffusion models). Also, 2 experts in SDv1.5 report much worse results compared to the 1- and 4-expert models, and I wonder why 2 experts perform so much worse.
Methods And Evaluation Criteria: - The proposed methods seem to make sense. They introduce task-specific normalization, a task-specific MoE, and task-specific input layers. The input layer should be adaptive, as the input images differ across the tasks the model is trained on. Also, since they show that the FFN changes the most when fine-tuning the diffusion model, introducing the task-specific normalization and MoE also seems to work well.
- Evaluation metrics are common metrics for each task. For example, they use FID for text-to-image image generation and LPIPS for SR task. These are the most common metrics to measure the generative and perceptual SR capability.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: - The main table (Table 2) shows the superiority of the proposed method: the diffusion model trained on multiple tasks surpasses the performance of task-specific models.
- Figure 5 shows that different experts are activated when different types of tasks are given.
- Table 3 shows the significance of the proposed method over typical PEFT methods such as LoRA.
Supplementary Material: Supplementary materials contain the experimental and implementation details, and also additional examples.
I skimmed them.
Relation To Broader Scientific Literature: The proposed method suggests the technique to enhance the fine-tuning of the diffusion model on multiple tasks. It would be extended to the other tasks like ControlNet, which has multiple types of inputs as conditions.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: As I mentioned, the major issue for me is that the significance of MoE.
In most cases, a single expert is sufficient to achieve the best performance.
Then, why do we need to use MoE instead of just fine-tuning the FFN layer with a task-specific normalization?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful review. We acknowledge that your concerns mainly pertain to Table 4, where we present an ablation study on the number of experts for both SDv1.5 and SDXL. We understand the reviewer is concerned regarding the significance of our method, given that in many cases a single expert ($N=1$) appears to perform adequately. We address your concerns below.
**Experimental setup in Table 4:** Our paper aims to maintain a parameter count similar to that of the original pre-trained model. To achieve this, we reduce the hidden dimension of each expert as the number of experts increases, ensuring that the ratio of an expert's dimension in MTU to the FFN dimension of the original model remains $G = \frac{1}{N}$.
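The parameter accounting behind this constraint can be checked with a small sketch (the dimensions below are illustrative, not taken from the paper; biases, the router, and task-specific layer norms are ignored):

```python
def ffn_params(d_model, d_ffn):
    # Two-layer FFN: up-projection plus down-projection, biases ignored.
    return 2 * d_model * d_ffn

def mtu_params(d_model, d_ffn, n_experts, g):
    # N experts, each with hidden dimension g * d_ffn.
    return n_experts * ffn_params(d_model, int(g * d_ffn))

# With G = 1/N, the total expert parameter count matches the original FFN.
base = ffn_params(1280, 5120)  # illustrative dimensions
upcycled = mtu_params(1280, 5120, n_experts=4, g=0.25)
```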
In Table 4 of the paper, we conducted an ablation study over $N$ (and consequently $G$) to determine the optimal configuration for both SDv1.5 and SDXL. Our results indicate that for SDv1.5, the best configuration is $N=4, G=0.25$. In contrast, for SDXL, the optimal performance is achieved with $N=1, G=1$, suggesting that retaining the original FFN size, with the addition of task-specific layer norms, is sufficient for handling multiple tasks in SDXL. Following this, we conduct an ablation over the size of the experts by removing the parameter constraint in Table 4.
**Ablation over size of the experts:** We hypothesize that for a given model, multi-task performance depends more on the size of the FFNs than on the number of experts. To test this, we removed the parameter constraint and performed an ablation study by varying $G$ from $\frac{1}{N}$ to 1. Table A below presents these results for SDXL, revealing that performance on tasks, particularly IE, SR, and IP, improves as the expert dimension increases. For instance, when $N=2$ with $G=1$ and $N=4$ with $G=1$, the performance in SR and IP improves significantly compared to the model reported in the paper $N=1, G=1$. These findings support our hypothesis. We also see that for a fixed value of $G$, increasing the number of experts improves performance for image-image tasks. Therefore, the MoE architecture becomes especially significant when scaling the model size for SDXL.
Table A: Ablation over number of experts for SDXL
| | Single-task | | $N=1$ | | $N=2$ | $N=2$ | | $N=4$ | $N=4$ | $N=4$ |
|----------|-------------|---|---------|---|---------|---------|---|----------|----------|---------|
| | | | $G=1$ | | $G=0.5$ | $G=1$ | | $G=0.25$ | $G=0.5$ | $G=1$ |
| #Params | 2.6B | | 2.6B | | 2.6B | 3.5B | | 2.6B | 3.5B | 5.2B |
| T2I (FID $\downarrow$) | 4.1 | | 3.9 | | 10.5 | 3.8 | | 12.3 | 11.3 | 3.8 |
| IE (I-T Dir Sim $\uparrow$) | 17.3 | | 20.1 | | 10.4 | 20.0 | | 11.8 | 12.3 | 19.1 |
| SR (LPIPS $\downarrow$ ) | 26.9 | | 26.5 | | 30.4 | 26.3 | | 30.5 | 28.6 | 25.8 |
| IP (I-I Dir Sim $\uparrow$) | 43.2 | | 44.2 | | 39.9 | 46.9 | | 38.6 | 40.9 | 49.3 |
**Analysing the ablation over number of experts for SDv1.5 (Table 4):** For SDv1.5, we determined that the optimal configuration is $N=4$ and $G=0.25$. Models using $N=1, G=1$ occasionally produced unusual artifacts in SR, and while we do not fully understand the cause, the $N=4, G=0.25$ setup consistently avoided these issues. Additionally, training became unstable with $N=8$ or $N=16$ due to the excessive reduction in expert size. The $N=2, G=0.5$ configuration also led to instability, likely because the experts conflicted across different tasks. Although $N=1, G=1$ may face similar task conflicts, its larger expert capacity helps mitigate this problem (similar to SDXL). In contrast, with $N=4, G=0.25$, tasks can select non-conflicting experts (see Figure 5), resulting in stable training.
**Why do we need MTU?:**
- Our results suggest that multi-task performance is linked to the width (i.e., expert capacity) and depth of the model. Specifically, reducing the width of the experts in shallow models like SDv1.5 works well, however in the case of deeper models like SDXL, larger experts perform well.
- However, the optimal expert size and count depend on your compute. If you are limited by resources $N=1, G=1$ may provide good results due to large expert capacity; however, if you can accommodate a larger model, an MTU setup with $N=4, G=1$ will lead to even better performance.
- Importantly, the success of the $N=1$ setup does not diminish the value of the MoE approach—in fact, our findings show that as the model scale increases, the benefits of MoE become even more pronounced, while still outperforming the alternative of using separate models for all tasks that would require significantly more compute.
We hope this addresses all of the reviewer's concerns, and we thank you for your feedback, which led to interesting experiments.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing a comprehensive rebuttal.
I understand that there would be issues in the size of experts in SDXL experiments.
To thoroughly validate the effectiveness of experts, can you test the identical experiments with a single expert of large hidden size? (e.g. N=1, G=4)
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment. Based on your feedback, we experimented with $G > 1$ to increase the size of the experts relative to the pre-trained model. Specifically, we evaluated SDXL using the configurations $N=1, G=4$; $N=1, G=2$; and $N=2, G=2$, where, as in our main training setup, we only fine-tuned the experts, router, and task-specific layers. We present the results in the Table B below.
| $N$ | $G$ | T2I | IE | SR | IP |
|---|--|--------------|-------------|--------------|------------|
| | |FID ($\downarrow$) | I-T Dir Sim ($\uparrow$)|LPIPS ($\downarrow$)| I-I Dir Sim ($\uparrow$)|
| 1 | 1 | 3.9 |20.1 | 26.5 | 44.2|
|1 | 2 | 6.3 | 19.8 | 26.6 | 44.5|
|1 | 4 | 13.7 | 18.9 | 30.2 | 36.7|
|2 | 2 | 5.8 | 20.4 | 24.8 | 44.7|
Table B: Ablation over the size of experts when $G>1$
The $N=1, G=2$ and $N=2, G=2$ configurations reduce performance metrics for T2I while slightly improving the performance of other tasks. However, compared to the results with a reduced $G < 1$ reported in Table 4 of the main paper, increasing $G$ still preserves the performance of many image-to-image tasks, indicating that our model remains scalable for these tasks. Moreover, with $G=2$, increasing the experts from 1 to 2 boosts the performance of all tasks, suggesting the importance of the MTU setup for multi-task learning in diffusion models.
That said, the MTU model does not perform well with the $N=1, G=4$ configuration for either model, as increasing the model width by four times while keeping the rest of the network frozen appears to negatively impact training stability. We also experimented with training the entire model, but this too produced unsatisfactory results. We believe this occurs because SDXL is a large pre-trained model trained with a specific FFN size. Modifying the expert size without correspondingly adjusting the size of the rest of the architecture disrupts training stability.
**Is it better to increase the size of the experts or the number of experts?** From our experiments, increasing the expert size beyond that of the pre-trained model (i.e., setting $G>1$) leads to a decline in T2I performance, while image-to-image tasks maintain performance comparable to $G=1$. Moreover, when $G=1$, increasing the number of experts boosts performance across all tasks (see Table A in our rebuttal). We believe this occurs because SDXL, a large model T2I pre-trained with a specific FFN size, is sensitive to changes in expert dimensions. Modifying the expert size without adjusting the size of the rest of the architecture disrupts training stability and adversely affects the pre-training task (T2I). In contrast, image-to-image tasks—learned from the initialization—benefit from a slightly increased $G$ as it adds more capacity, but suffer when $G$ is increased to 4 as other model parameters are not well-tuned for this expert size. For the same reason, image-image performance may be reduced with the shrinking of the expert sizes due to the overall loss of model capacity. It may be that for very large or smaller expert sizes to be effective, SDXL must first be pre-trained with larger FFNs than those currently used. In contrast, shrinking the expert sizes may have worked for SDv1.5 because using a smaller FFN may also result in an optimal pre-trained model. Alternatively, it could be the case that current SDv1.5 has some redundant FFN parameters, therefore, $G=0.25$ works well. Investigating optimal FFN sizes for pre-training is out of scope of this work. Therefore, we recommend maintaining the expert sizes (i.e., keeping $G=1$) and increasing the number of experts for improved performance across all tasks. | Summary: The paper aims to achieve multi-tasking ability for a pre-trained T2I model, via a "lightweight" approach - MTU.
The paper starts from the insight that Feed-Forward Networks (FFNs) receive the most significant change when finetuned to a new task for a pre-trained T2I model.
MTU splits original FFNs into smaller FFN experts and uses a dynamic router mechanism to adapt to new tasks.
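To illustrate the mechanism the summary describes, here is a minimal sketch of an FFN split into smaller experts mixed by a router. This is a simplified reading, not the paper's implementation; the name `mtu_ffn`, the ReLU activation, and soft (rather than top-k) routing are assumptions.

```python
import numpy as np

def mtu_ffn(x, experts, router_w):
    """Mix N smaller FFN experts with weights from a softmax router.

    experts: list of (W1, W2) pairs, each a smaller FFN standing in for the
    original full-width FFN; router_w maps the input to one logit per expert.
    """
    logits = x @ router_w                       # (N,) routing logits
    w = np.exp(logits - logits.max())
    w = w / w.sum()                             # softmax routing weights
    return sum(
        wi * (np.maximum(x @ W1, 0.0) @ W2)     # ReLU FFN expert
        for wi, (W1, W2) in zip(w, experts)
    )

# Toy dimensions: feature dim 4, expert hidden dim 2, N = 3 experts.
rng = np.random.default_rng(0)
d, h, N = 4, 2, 3
experts = [(rng.normal(size=(d, h)), rng.normal(size=(h, d))) for _ in range(N)]
router_w = rng.normal(size=(d, N))
x = rng.normal(size=d)
out = mtu_ffn(x, experts, router_w)
```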
## update after rebuttal
My concerns are resolved after the rebuttal. I keep my original rating.
Claims And Evidence: One of the most important conclusions is authors claim that FFNs in the pre-trained T2I models receive the most dramatic change when adapted to downstream tasks. The authors demonstrate this insight by visualizing the deviations from previous weights in Fig. 2, supporting that FFNs are specializing in adapting to new tasks.
Methods And Evaluation Criteria: The methods are intuitive, based on the insights in Table 1 and Figure 2. Due to the specialization of FFNs, MTU uses them as experts and uses a router as a kind of modulation block to better adapt to different new tasks.
For evaluation criteria, authors evaluate three different tasks, using FID, LPIPS, Directional similarity as metrics. However, for tasks like Super-Resolution, only LPIPS is evaluated while some traditional metrics such as PSNR and SSIM are not considered.
It might be helpful to add a small user study to demonstrate the superiority of MTU.
Theoretical Claims: No theoretical claims are found in the paper.
Experimental Designs Or Analyses: The authors compare across different tasks to demonstrate the performance of MTU and its versatility. MTU achieves the best results in most of the tasks. Given that MTU is the only approach handling 4 different tasks, it is promising that the proposed techniques are generalizable to different downstream tasks.
The authors also show an analysis of experts in Figure 5, which is helpful for understanding the mechanism of router assignment.
Supplementary Material: The authors show more implementation details and more results in the supplementary material.
Relation To Broader Scientific Literature: N/A.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Weaknesses:
- Somehow all the image results shown in the paper are of low quality. For example, in Figure 2, both cases show low quality (e.g., blurry background, over-saturated color style) in the first place. Is it because of the pre-trained T2I models' poor performance? The same holds for Figure 3 (wolf) and the IE results in Figure 4. This raises the concern that the improvement brought by MTU may be trivial.
Other Comments Or Suggestions: - In Fig. 3, it might be helpful to label each component, e.g., "Input image", "Output image", "Encoder", "Decoder". Also might be more clear if authors can exactly show what the prompt is in Fig. 3.
- In Fig. 1, it might be helpful to indicate which T2I model is used here.
Questions For Authors: Please see my previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive review, and for recognizing that MTU is the only approach so far to handle 4 different downstream tasks. We address your concerns below.
1. **Quality of image generation:** We believe our MTU model produces images of comparable quality to the base model. While Figure 3 does show some instances of over-saturation in the inpainting (IP) images (left columns, bottom row) for the MTU model, the same over-saturation can also be seen in single-task models. The appendix includes several additional examples where this is not an issue. We point to Figure 7 in the appendix, where we show qualitative results of text-to-image generation for SDv1.5 and SDXL, where the quality of the images matches that of the base model. Images in the main paper and the appendix were selected at random from the test set.
2. **Clarifications about Figures 1 and 3:** In Figure 1, the top row displays results from SDv1.5, while the bottom row features outputs from SDXL. In Figure 3, the image editing prompt used is "turn into a polar bear."
Please let us know if you have any more questions. Thank you. | Summary: The paper proposes Multi-Task Upcycling (MTU), an extension of pretrained text-to-image models for multi-task on-device deployment. The authors first investigate the differences between fine-tuned weights and pretrained initial weights across different layers in LDM. Based on this observation, they split the Feed Forward Networks (FFNs) into smaller blocks and add components for task-specific processing. MTU achieves state-of-the-art performance for SD 1.5 and SDXL in downstream image synthesis tasks.
Claims And Evidence: 1. The proposed multi-task learning method achieves better performance while preserving TFLOPs and parameters compared to other baselines.
2. Inst-Inpaint can only remove objects, and the authors follow the dataset to train MTU. However, MTU has a lower I-I Directional Similarity than vanilla Inst-Inpaint in SD 1.5. Although the performance of other tasks is improved, inpainting is not. The authors must analyze why this happens.
Methods And Evaluation Criteria: Benchmark and evaluation metrics are reasonable.
Theoretical Claims: This paper lacks any proof or theoretical claims.
Experimental Designs Or Analyses: The experimental designs are valid.
Supplementary Material: I have checked the Appendix material.
Relation To Broader Scientific Literature: The proposed model is more applicable to real-world on-device applications since its number of parameters is almost the same as that of the base diffusion models, yet it can handle multiple tasks with a single model.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths**
1. The analysis of each block (FFN, CA, SA) in LDMs is interesting.
**Weaknesses**
1. If the base model changes (e.g., LCM, SD3.5, Flux), the weight distribution of FFNs could differ.
Other Comments Or Suggestions: 1. It would be better to report the performance of the base model in Tab. 1.
2. Please add qualitative comparisons for text-to-image generation in the main paper. Currently, the text-to-image generation results are only presented quantitatively.
Questions For Authors: 1. The number of training samples is 118K for T2I, 281K for image editing, 23K for super-resolution, and 90K for image inpainting. Doesn’t this data imbalance affect the training results?
2. How is the training batch composed? Does each training batch consist of different tasks or the same task? And what is its effect?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review. We have answered your questions below.
**Weight distributions of FFNs in SD3.5 for image-to-image tasks:** Thank you for your comment. To the best of our knowledge, general image-to-image generation using MMDiT-based models has not been thoroughly explored. Although there is some work in image editing [1], this work was released as open source only after our submission, and its applicability to other image-to-image tasks remains unclear. To address your question, we designed a baseline approach that adds separate positional embeddings to the source and target images, concatenates them along the feature dimension, and processes the extra channels with a linear layer while fine-tuning the entire model. While this baseline produces decent results, we have not included them in our paper, as image-to-image generation in MMDiTs is a research topic in its own right.
To understand the distribution of weights within our proposed baseline, we fine-tuned SD3.5 on super-resolution (SR) and image editing (IE) tasks, then measured the distance between the fine-tuned and pre-trained models for the FFNs and various attention layers. The table below reports the mode of the distance across all layers, revealing that the FFNs deviate most from the pre-trained initialization. While some attention layers contain outliers, the deviation is consistently high in FFNs. This suggests that these layers should be split into experts for MTU.
| | FFN | attn-Q | attn-K | attn-V | attn-Out | attn2-Q | attn2-K | attn2-V | attn2-Out |
|----|-------|--------|--------|--------|----------|---------|---------|---------|-----------|
| SR | **1.032** | 8.7e-3 | 9.5e-3 | 9.6e-3 | 0.344 | 7.4e-3 | 4.5e-3 | 4.6e-3 | 0.320 |
| IE | **0.856** | 6.7e-3 | 7.6e-3 | 5.1e-3 | 0.318 | 1.0e-2 | 8.3e-3 | 4.5e-3 | 0.320 |
Table A: Mode of the deviations between the fine-tuned and pre-trained model for SDv3.5.
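The per-layer deviation measurement described above could be sketched as follows. The relative Frobenius distance and the helper names are our assumptions (the rebuttal reports the mode over layers, which a full script would add):

```python
import numpy as np

def layer_deviation(w_pre, w_ft):
    """Relative Frobenius distance between pre-trained and fine-tuned weights."""
    return np.linalg.norm(w_ft - w_pre) / np.linalg.norm(w_pre)

def deviations_for(pre_state, ft_state, key_substring):
    """Deviation of every layer whose parameter name contains key_substring."""
    return [layer_deviation(pre_state[k], ft_state[k])
            for k in pre_state if key_substring in k]

# Toy state dicts: the FFN weight drifts far more than the attention query.
rng = np.random.default_rng(0)
pre = {"blk0.ffn.w": rng.normal(size=(8, 8)),
       "blk0.attn_q.w": rng.normal(size=(8, 8))}
ft = {"blk0.ffn.w": pre["blk0.ffn.w"] + rng.normal(size=(8, 8)),               # large drift
      "blk0.attn_q.w": pre["blk0.attn_q.w"] + 0.01 * rng.normal(size=(8, 8))}  # tiny drift
ffn_dev = deviations_for(pre, ft, "ffn")
attn_dev = deviations_for(pre, ft, "attn_q")
```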
**Training data and batches:** Each training batch incorporates all tasks, and we minimize the sum of all losses in each iteration. We opted against random task selection per batch to avoid increased training time. During training, we construct the dataloader using the smallest dataset (super-resolution) and then sample subsets of equal size from the other datasets, shuffling these subsets each epoch. On average, this approach maintains balance across datasets.
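A minimal sketch of this balanced sampling scheme (dataset sizes scaled down by 1000x; `balanced_epoch` is an illustrative helper, not the actual dataloader):

```python
import random

def balanced_epoch(datasets, epoch_seed):
    """One epoch of balanced multi-task data: every dataset is subsampled to
    the size of the smallest one, with a fresh shuffle each epoch."""
    n = min(len(d) for d in datasets.values())
    rng = random.Random(epoch_seed)
    return {task: rng.sample(d, n) for task, d in datasets.items()}

# Scaled-down stand-ins for the real dataset sizes (118K / 281K / 23K / 90K).
datasets = {"t2i": list(range(118)), "ie": list(range(281)),
            "sr": list(range(23)),  "ip": list(range(90))}
epoch0 = balanced_epoch(datasets, epoch_seed=0)
```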
**Inpainting performance:** Thank you for pointing this out. Reviewer FqBN asked about our analysis of task interference, and we addressed this issue in our response to Reviewer FqBN. In summary, we analyze model performance by evaluating different combinations of tasks to understand how they interfere with one another. Our findings indicate that inpainting (IP) is less compatible with text-to-image generation (T2I) and super resolution (SR), and it is highly compatible with image editing (IE). This incompatibility may contribute to the observed performance drop in IP. We view reducing task interference as an important direction for future work and will address it in the revised version of the paper.
**Qualitative results for text-to-image generation:** We will add the results for the text-to-image generation task in the main paper. These results are currently presented in the appendix (Figure 7).
[1] FreeFlux: Understanding and Exploiting Layer-Specific Roles in RoPE-Based MMDiT for Versatile Image Editing, Wei et al.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I have read it carefully, along with the other reviews.
Please include detailed training schemes and an analysis of task interference in the revised version.
As my concerns have been addressed, I will increase my original rating from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for increasing your score. We will add more details and task interference analysis in the revised version of the paper. | null | null | null | null | null | null |
AutoElicit: Using Large Language Models for Expert Prior Elicitation in Predictive Modelling | Accept (poster) | Summary: This paper explores the potential of LLMs to elicit prior distributions, aiming to reduce sample complexity in Bayesian inference, particularly in data-scarce healthcare domains. The authors compare two LLM-based prior elicitation methods: their proposed AutoElicit and In-Context Prior. AutoElicit directly prompts the LLM to provide estimated means and variances for Gaussian priors in text. In contrast, In-Context Prior elicits posterior predictive distributions from the LLM, from which implicit priors are inferred using maximum likelihood. The authors demonstrate that both methods can be beneficial across various tasks, with AutoElicit generally yielding superior priors. Notably, the paper shows clear improvements in inference when leveraging LLM-elicited priors compared to uninformative priors. Furthermore, the methodology facilitates the incorporation of expert knowledge expressed in natural language during the prior elicitation process.
Claims And Evidence: This paper presents commendable work, offering straightforward and effective methods. The manuscript is well-written and easy to follow. The application to healthcare domains is highly relevant, and the extensive experiments demonstrate the utility of LLM-elicited priors.
Considering the significance of the healthcare application, a deeper mechanistic understanding of the LLM's reasoning in generating these effective priors would be valuable. While space limitations might preclude a full mechanistic analysis, a discussion exploring potential explanations for the LLM's success in this context would strengthen the paper.
Methods And Evaluation Criteria: The authors tested the LLM-elicited priors (along with uninformative priors) on synthetic data and healthcare datasets. I find their evaluation and methods adequate.
Theoretical Claims: No theoretical claim was made in this paper.
Experimental Designs Or Analyses: I have reviewed all their experiments.
Supplementary Material: I have reviewed the parts where prompt examples are included.
Relation To Broader Scientific Literature: This work is also related to other work that tries to link LLM (and in-context learning) and Bayesian inference.
Essential References Not Discussed: I don't see any other essential reference that should be included.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: None.
Questions For Authors: Have you tried eliciting priors using the iterated in-context learning proposed in Zhu & Griffiths (2024)? How would the priors elicited using this approach compare to those elicited using AutoElicit and In-Context Priors?
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Privacy and Security', 'Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)']
Ethical Review Concerns: The healthcare datasets presented within this paper raise potential ethical concerns and likely require independent ethical review.
++++
Details of ethic review below:
Regarding research ethics: the empirical evaluation uses one synthetic data set, four public data sets, and one (sensitive) data set collected by the authors (which they label "private"). From Section 5.1, para.4 of the paper:
"We also evaluate AutoElicit on a private dataset not in the LLM’s training, collected as part of our study on Dementia care. This contains 10 features of daily in-home activity and physiology data, and clinically validated labels of positive or negative urinary tract infection (UTI), based on the analysis of a urine sample by a healthcare professional. This dataset contains 78 people and 371 positive or negative days of UTI."
However, the authors provide no statement/evidence that appropriate IRB approvals were obtained or protocols followed. I would like the authors to briefly state whether this is the case in their response. This is in accordance with the ICML guidelines: "While we acknowledge that standards for IRB vary across borders and institutions, research involving human subjects should provide evidence that it adhered to the authors’ home institution’s procedures for obtaining IRB approval or was eligible for an exemption."
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and comments to improve our paper. We are pleased that they felt that this was commendable work that is well-written, easy to follow, and highly relevant for healthcare. We are glad that the methods were considered straight-forward, but effective, and that reviewer felt that the extensive experiments demonstrated the utility of our approach.
In our rebuttals, alongside our point-by-point responses, we present **three new experiments**.
### **Suggested improvements**
**Q1: A potential explanation for the LLM's success**
We believe our method’s success is due to the LLM’s ability to translate information across domains. LLMs can effectively answer questions in medical domains and probability theory. They offer the ability to translate expert information in the form of natural language into probability distributions that represent prior beliefs on model parameters. We hypothesise that resampling the LLM multiple times mitigates the effects of hallucination for LLMs that are more likely to fabricate information. By combining many Gaussian components, we can approximate an LLM’s non-Gaussian prior. For our method to be successful, the LLM needs to have an understanding of the problem domain, Gaussian distributions, and linear models. We believe that this is why we see improved performance from chain-of-thought models, which can reason about the impact of each feature on an outcome. We will include this discussion in the final version of the work.
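The way many resampled elicitations combine into one prior can be sketched as an equally weighted Gaussian mixture. This is a generic illustration under assumed notation (diagonal covariances, equal weights); the paper's Eq. 2 defines the actual construction:

```python
import numpy as np

def mixture_log_prior(theta, means, stds):
    """Log-density of an equally weighted Gaussian-mixture prior.

    means, stds: (K, D) arrays, one diagonal-Gaussian component per elicited
    (mean, std) pair; theta: (D,) parameter vector of the linear model.
    """
    K = means.shape[0]
    # Per-component diagonal-Gaussian log-density, summed over dimensions.
    comp = -0.5 * np.sum(((theta - means) / stds) ** 2
                         + np.log(2 * np.pi * stds ** 2), axis=-1)
    # log mean_k exp(comp_k), computed stably via log-sum-exp.
    m = comp.max()
    return m + np.log(np.exp(comp - m).sum()) - np.log(K)

# With K = 1, the mixture reduces to a single standard Gaussian.
lp = mixture_log_prior(np.zeros(1), np.zeros((1, 1)), np.ones((1, 1)))
```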
**Q2: Further comparisons to a baseline**
Zhu & Griffiths (2024) is not directly comparable to our work, as they do not provide a framework for eliciting the parameters of a predictive model. The most comparable work, Gouk & Gao (2024) (cited), generates synthetic data using an LLM to include in the training of linear *classification* models. To compare with our method, we extend their ideas to *regression* tasks and present new experimental results.
These are presented in our response to **Reviewer 1 (xAxP) in section Q2**, along with a detailed description of the baseline. In particular, Gouk & Gao’s method is susceptible to LLM data memorisation, putting it at risk of artificially providing improved posterior results on public datasets. To better compare the methods, we have split the results by the private and public tasks. We find that our approach yields improved posterior accuracy and mean squared error over Gouk & Gao’s method.
**Q3: Add ethical approval statement**
We have already obtained the appropriate ethics for our study but wanted to meet the double-blind requirement by not providing exact information on the ethics approval, including the identification number for our study. We include this information in the final version, alongside a description of the recruitment process. In the ethics statement, we have redacted the appropriate information to prevent breaking the double-blind requirements:
> This study was performed in collaboration with INSTITUTION. Participants were recruited from the following: (1) health and social care partners within the INSTITUTION, (2) urgent and acute care services within the INSTITUTION, (3) social services who oversee sheltered and extra care sheltered housing schemes, (4) INSTITUTION and (5) specialist memory services at INSTITUTION. All participants provided written informed consent. Capacity to consent was assessed according to Good Clinical Practice, as detailed in the FRAMEWORK and the ACT. Participants were provided with a Participant Information Sheet (PIS) that includes information on how the study used their personal data collected in accordance with the ACT requirements. If the participant was deemed to lack capacity, a personal or professional consultee was sought to provide written consent to the study. Additionally, the capacity of both the participant and study partner is assessed at each research visit. Research staff conducting the assessment have completed the COURSE training and COURSE training. If a participant is deemed to lack capacity but is willing to take part in the research, a personal consultee is sought in the first instance to sign a declaration of consent. If no personal consultee can be found, a professional consultee, such as a key worker, is sought. This process is included in the study protocol, and ethical panel approval is obtained.
We feel the use of privately collected data from a real-world healthcare problem provides substantial evidence for the effectiveness of our framework, and we are indebted to our study participants.
**Thank you for taking the time to review our work. We will implement the suggested improvements to the final manuscript. If we have answered your questions, then we would appreciate you considering raising your score. If anything is still unclear, we are happy to discuss further.**
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their thoughtful responses. I will keep my score unchanged, as it already reflects strong support for accepting this paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response and time. Their comments have helped to improve our work. | Summary: This work proposes a framework to elicit prior distributions over parameters from LLMs, and use these in connection with linear models, with the goal to achieve transparent and interpretable, yet performant models.
Claims And Evidence: The paper is well written and relatively easy to follow.
Methods And Evaluation Criteria: The application settings range from synthetic to real-world clinical (private) dataset, which I very much appreciate. This renders the experimental setup interesting and relevant.
Theoretical Claims: Major weakness:
My main point of criticism is on the loss of *transparency/interpretability* of the proposed approach.

*Context*: The auto-elicited prior is constructed as a Mixture of Gaussians (see Eq. 2). In the experiments, the authors choose K=100 components, and in App. 12 the authors show that this very complex prior is necessary to achieve strong predictive performance.

*Criticism*: Is it desirable to construct such complex prior distributions with LLMs and then apply them to linear models with the goal of maintaining interpretability? The goal of using a linear model is two-fold: 1) the functional relationship is simple and 'interpretable' (this is maintained with the proposed approach); 2) the weights are fitted by a simple least-squares procedure (or a similarly simple and transparent procedure in the Bayesian case). If the prior is now extremely complicated, both because it contains knowledge of the LLM and because it is a 100-component MoG, we lose all transparency of the predictive weights and point 2) is lost/'violated'. Because of this, I cannot see a clinician trusting the model as an "interpretable"/simple model anymore. We have---through the rules of Bayesian inference---made our linear model a linear model augmented with an LLM. If we wanted to improve the performance of a (linear) model, we could use an LLM directly, or use another model.

Am I missing something in terms of interpretability? I think it's important to discuss this point in the rebuttal; at this point, it drastically diminishes the significance of the work for me. I would also like to note that the general procedure of "eliciting priors from LLMs" remains interesting (though it has been done in related work); it is only its use for predictive purposes, in the context of trying to maintain the interpretability of linear models, that I find challenging.
I would like to discuss this point during the author rebuttal and will revisit my decision based on this discussion.
Experimental Designs Or Analyses: The experiments are strong, I particularly like the idea of including expert information in addition. The figures are extensive, informative and clear.
The basic setup and notation are simple, yet somewhat hard to follow. An example explaining important concepts such as the task description $I_k$ (e.g. App. 7.3) would make them clearer.
Supplementary Material: I consulted selected parts of the supplementary material, and refer to those in the other responses.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: In Sec. 5.6 the authors study the question of whether in-context learning in LLMs is Bayesian. The authors also mention this as one of the three contributions in Sec. 1. However, this question has been studied already. Some of these papers you have cited in App. 2.2 (though not in the main text; why?). It may be worth adding "Are Large Language Models Bayesian? A Martingale Perspective on In-Context Learning" (ICML 2024).
Other Strengths And Weaknesses: - The appendix is detailed, and very helpful in augmenting the main text.
Other Comments Or Suggestions: N/A
Questions For Authors: - Have you tried other uninformative priors than N(0,I), even a higher variance for instance? Given you use a K=100 MoG, a natural idea might be to try a MoG prior possibly with randomised parameters. -- What I would be interested in disentangling is the question whether the complexity of the prior (MoG vs. single Gaussian in the uninformative case) or the “correctness” of the LLM prior is crucial for performance.
- Can you state how you compute predictive performance? What’s the predictive distribution you use / can you point me to the equation?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their feedback and time. We are happy that the work is considered well-written, with strong experiments, clear and informative figures, and a detailed appendix. We are pleased that the evaluation on a private dataset is appreciated and provides strong motivation for the work and that the inclusion of expert information is interesting.
In our rebuttals, alongside our point-by-point responses, we present **three new experiments**.
### **Suggested improvements**
**Q1: Discussion of related works in LLMs from a Bayesian perspective**
As you note, numerous works discuss in-context learning from a Bayesian perspective. Our work is unique in that we empirically estimate the internal prior and posterior distributions of an in-context learner. With this, we use MCMC methods to compare the in-context posterior with a separately calculated posterior. This allows us to question whether an LLM is performing Bayesian inference on *linear modelling of real-world numerical tasks*. This is more relevant to our work than the previous natural language or theoretical perspectives. This contribution also provides a complete decision criterion to determine whether in-context learning or AutoElicit is more performant for a task.
We agree that this is an important area of the literature. We were restricted by the page length requirements of the submission and will include a complete description in the main section of the final paper with your suggestions.
**Q2: Discussion on the interpretability of the priors**
We use 100 mixture components in our prior to mitigate the effects of LLM hallucination, which is important in healthcare tasks. The prior distribution can be understood by visualising histograms of the parameter samples from the final mixture (linked below in a modified Figure 15):
**FIGURE:** https://imgur.com/a/gRRVmUz
As the densities of these distributions are used directly as the prior, this provides a complete picture. For linear models, each feature's prior corresponds to its direction and importance when predicting the target. On the synthetic task (row 1), the prior distributions concentrate on the parameters that we expect. For the UTI task (row 2), the LLM provided prior distributions that suggest Feature 2 is negatively predictive of a positive UTI, whilst Feature 10 is a strong positive predictor. For Breast Cancer (row 6), the LLM produced two modes in its Feature 2 prior. Here, the LLM was either unsure of its belief, and so the elicited prior changed between task descriptions, or the LLM was unsure of the feature's direction of correlation *but guessed that the feature was important*.
A chain-of-thought LLM like Deepseek-R1 could help us to better understand the elicitation process. We also found that using LLMs for predictions directly (in-context learning) underperforms in comparison to our approach.
**Q3: Discussion on the number of mixture components**
We investigate, in **Rebuttal 2 for Reviewer 2 (3Pnd), section Q3**, how the posterior performance depends on the number of prior components. There, we significantly reduce the number of mixture components whilst retaining the posterior accuracy and mean squared error.
**Q4: *New experiment* evaluating an uninformative mixture of random Gaussians**
Based on your suggestion, we compare our results to a new uninformative baseline. We sample 100 values from a standard normal distribution ($\mu \sim N(\mu | 0,1)$), and we use these as means of 100 Gaussian distributions ($\theta \sim N(\theta | \mu, 1)$). With these distributions we construct a mixture prior. As we keep the standard deviation at 1 for each component, the resulting prior has a greater range in the parameter space. The figure linked below shows the results:
**FIGURE:** https://imgur.com/a/STNjyEG
This new uninformative prior leads to no noticeable difference in the results for the synthetic task, Diabetes task, and Breast Cancer task. However, on the UTI task, we see marginally better accuracy than the previous uninformative prior, and on the Heart Disease and Hypothyroid tasks, we see greater improvements. Despite this, DeepSeek-R1-32B-Q4 or GPT-3.5-turbo continue to provide more informative priors. Therefore, the posterior improvement is not due to the complexity of the prior.
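The random-mixture baseline described in Q4 is simple to construct; a sketch (seed and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 100
mus = rng.standard_normal(K)        # component means mu_k ~ N(0, 1)
# Each component is N(theta | mu_k, 1); draw from the equal-weight mixture.
ks = rng.integers(K, size=100_000)
draws = rng.normal(mus[ks], 1.0)
```

Because the unit-variance components are centred on means that are themselves spread with unit variance, the marginal variance of this prior is roughly 2, i.e. wider than a single N(0, 1) prior.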
**Q5: Clarity on the predictive performance**
Predictive performance refers to either the posterior accuracy or mean squared error. To calculate this, we sample 5 chains of 5000 posterior parameter samples using MCMC methods. We use these sampled posterior parameters to predict the labels on our test set and calculate the metrics. This provides a distribution of performance.
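A sketch of this evaluation loop, with toy data and far fewer samples than the 5 chains of 5000 draws described above (`posterior_accuracy` and the logistic decision rule are assumptions for illustration):

```python
import numpy as np

def posterior_accuracy(theta_samples, X_test, y_test):
    """Accuracy of each posterior parameter sample (logistic model assumed):
    one accuracy value per sample, giving a distribution of performance."""
    preds = (theta_samples @ X_test.T > 0).astype(int)   # (S, N) predictions
    return (preds == y_test).mean(axis=1)                # (S,) accuracies

# Toy stand-in for the MCMC draws: noisy samples around a true parameter.
rng = np.random.default_rng(0)
X_test = rng.normal(size=(50, 3))
theta_true = np.array([2.0, -1.0, 0.5])
y_test = (X_test @ theta_true > 0).astype(int)
theta_samples = theta_true + 0.05 * rng.normal(size=(200, 3))
acc = posterior_accuracy(theta_samples, X_test, y_test)
```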
**Thank you for taking the time to review our work. We will implement the suggested improvements to the final manuscript. If we have answered your questions, then we would appreciate you considering raising your score. If anything is still unclear, we are happy to discuss further.** | Summary: Linear predictive models remain valuable for applied researchers. However, principled Bayesian approaches to such models require the specification of some prior. What prior should researchers use for different parameters? Good priors may be informed by expert information, but such expert knowledge can be hard to come by. The authors explore the possibility of using language models to elicit such priors. They develop a new method, AutoElicit, to automatically extract priors from a range of LLMs and compare against direct prediction and ICL elicitation from models. Generally, they find that the priors elicited from their method yield better downstream predictive performance on a range of medical tasks (and one synthetic predictive task).
Claims And Evidence: The paper is very well-written and thorough. I particularly appreciate the authors' extensive Appendix. They clearly ran a wide range of experiments that probe the characteristics of AutoElicit against ICL. I particularly appreciate the authors' exploration of other LLMs --- their work raises interesting questions about which model to use, when, based on costs of elicitation --- as well as their analysis of the potential impact of memorization on results.
What I was less clear about, though, is:
- (1) Where AutoElicit stands in relation to other prior elicitation methods in terms of performance
- (2) The value / ease of incorporating expert information
- (3) The role of the diversity of task descriptions
On (1), while I admire the authors' extensive literature review, I was a bit disappointed to not see direct head-to-head comparisons with some alternate elicitation methods. For instance, the authors mention Zhu and Griffiths --- how does such a method compare on something like the UTI dataset? It would be good to be clearer on why alternate methods were not considered in the empirical section.
For (2), the authors claim that their method is a way to also incorporate expert knowledge into prior elicitation. However, the UTI example they gave -- to my sense? -- had more "commonsense medical knowledge" that was added (e.g., UTIs lead to more urination at night). I think it makes sense that GPT-3.5 wasn't really impacted by this; I wouldn't say that's real "expert" knowledge per se. Can the authors sway parameters even more with genuine, niche expert knowledge? This need not be in this current submission in my opinion, but is something I walked away curious about and did not feel the current paper really demonstrated.
And on (3), I was quite confused on the task description resampling. Based on the Appendix, the number seems to make quite a big difference! But what is the character of these different descriptions? Can you add some examples to the Appendix? How different are they really? Or is the "boost" from just resampling?
Methods And Evaluation Criteria: Generally I think the evaluation was sound, however, as noted above, I do not completely understand why the number of task descriptions has such a big role (Fig 14). Is this really from the task descriptions, or something else? I see the authors set the temperature to 0.1? Why generally so low, if part of the role is to get diversity in parameters?
I also came away with some confusion over whether the priors were semantically sensible. For instance, in Fig 15, some parameters' priors shifted around quite a bit -- others not. Can the authors add more qualitative analyses into what changed and whether it is semantically-relevant to the task at hand?
Theoretical Claims: I believe the theory is appropriate but am not 100% confident in my assessment.
Experimental Designs Or Analyses: As noted above, I think the authors did a very thorough job in a range of experiments -- however, I am particularly curious about the role of the task descriptions versus other potential factors, such as the resampling itself.
Supplementary Material: I read the entire Appendix.
Relation To Broader Scientific Literature: I think the authors did a generally good job of situating their work in relation to other literature. I think the experiments around "do LLMs do Bayesian inference", though, did not provide much substantial value on top of what's already been looked at in the literature. I'd encourage the authors to focus principally on the role of prior elicitation, which seems to be the more novel direction here.
I would have liked to see though more empirical comparison to at least one alternate method for prior elicitation from LLMs. How does AutoElicit compare?
Essential References Not Discussed: I would encourage the authors to look at:
Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning https://proceedings.neurips.cc/paper_files/paper/2023/hash/3255a7554605a88800f4e120b3a929e1-Abstract-Conference.html -- looks at LLMs and ICL from a Bayesian lens
Automated Statistical Model Discovery with Language Models https://arxiv.org/abs/2402.17879 which also looked at the use of LLMs in more classical statistical modeling
MPP: Language Models as Probabilistic Priors for Perception and Action https://arxiv.org/pdf/2302.02801 --- an early work using LLMs as priors (though not for linear modeling problems as these authors consider).
Other Strengths And Weaknesses: I greatly admire the authors' effort to make the code easily runnable (A.1) and their detailed description of how to run the method (A.7.6). I think this will be valuable to the broader community. Well done!
For future (not here), I would be keen to see how actual experiments -- e.g., doctors from the UTI study? -- interpret the parameters and assess the value of AutoElicit.
Other Comments Or Suggestions: Fig 17 says "OpenAI models" but includes DeepSeek?
Questions For Authors: I raised my questions above --- particularly around the task descriptions and some deeper parameter analyses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's kind words and time. We are pleased that our work was considered well-written with thorough experiments and appendices. We are happy that the extensive literature review, discussion of elicitation costs, and LLM memorisation experiments were appreciated.
In our rebuttals, alongside our point-by-point responses, we present **three new experiments**.
### **Suggested improvements**
**Q1: Further comparisons to a baseline**
Zhu & Griffiths (2024) is not directly comparable to our work, as they do not provide a framework for eliciting the parameters of a predictive model. The most comparable work, Gouk & Gao (2024) (cited), generates synthetic data using an LLM to include in the training of linear *classification* models. To compare with our method, we extend their ideas to *regression* tasks and present new experimental results.
These are presented in our response to **Reviewer 1 (xAxP) in section Q2**, along with a detailed description of the baseline. In particular, Gouk & Gao’s method is susceptible to LLM data memorisation, putting it at risk of artificially providing improved posterior results on public datasets. To better compare the methods, we have split the results by the private and public tasks. We find that our approach yields improved posterior accuracy and mean squared error over Gouk & Gao’s method.
**Q2: Using more challenging expert information**
This is an interesting point. A synthetic example of an LLM using completely new expert information is given in the last paragraph of Appendix A.11 and Figure 13 (linked below).
**FIGURE:** https://imgur.com/a/QSZl2ZO
This demonstrates that giving increasingly more detailed information about the task (provided as natural language, Appendix A.11) allowed the LLM to update the elicited distributions correctly. We will lengthen the discussion in Section 5.3 and Appendix A.11 of the final paper to include this point.
**Q3: *New experiment* on the variation of the task descriptions and number of mixture components**
We will include examples of rephrased task descriptions in the appendix. We used a temperature of 0.1 to elicit the prior distributions so that the LLM reliably generated parameters of a Gaussian distribution and to mitigate the effects of hallucination, as we focus on healthcare tasks. When rephrasing the task descriptions, we used a higher temperature of 1.0 to get diverse descriptions. This is why we see variations in the priors elicited. An example rephrased system role is:
> You are a simulator of a logistic regression predictive model for … Here the inputs are values from sensors around a home and the output is the probability of the presence of a urinary tract infection. Specifically, the targets are …
To:
> You function as a logistic regression prediction model for … Inputs include sensor readings from a home, and the output is the probability of a urinary tract infection. The targets are …
The meaning of the description remains the same, but the rephrasing mitigates against changes in the priors elicited based only on the language used in the prompt.
To supplement this, we calculate the posterior performance with varied numbers of mixture components in our prior (linked below):
**FIGURE:** https://imgur.com/a/E3hE9Na
We see little variation (except for Breast Cancer with Deepseek) in the accuracy or mean squared error as we increase the number of components in the prior. This suggests our framework is robust to the number of task descriptions. We hypothesise that a greater number of mixture components makes the framework more resilient to LLMs that frequently hallucinate, especially on tasks with large numbers of features (such as Breast Cancer).
In Appendix A.12, we showed that the density of the priors changed with the number of mixture components, however, this has small effects on the posterior performance. This is likely because the sample space that provides good posterior performance is covered by the majority of elicited components, whilst the large changes in Appendix A.12 are due to more unique components.
**Q4: Discussion on the interpretability of priors**
In our response to **Reviewer 3 (fJAR) in section Q2**, we provide examples of interpreting the priors elicited from GPT-3.5-turbo. As suggested, it would be interesting to explore how these priors are understood by clinicians and if they are used separately from the predictive modelling. The points you have raised are valuable. This is future work we are currently organising with clinicians.
**Related works**
Thank you for the helpful suggestions. We will include these references with discussions in the final version.
**Thank you for taking the time to review our work. We will implement the suggested improvements to the final manuscript. If we have answered your questions, then we would appreciate you considering raising your score. If anything is still unclear, we are happy to discuss further.** | Summary: The paper considers a prior elicitation method based on querying large language models, rather than human experts, for the purpose of fitting Bayesian linear models. The authors make a distinction between the explicitly elicited priors supplied by their method and the implicit priors used by the LLM when doing in-context learning. They offer a number of insights related to how LLMs do not exhibit consistent Bayesian reasoning, and how explicitly elicited priors can lead to improved sample efficiency and outperform in-context learning.
Claims And Evidence: The paper makes three main claims:
* The introduction of a new algorithm for eliciting priors over linear models from LLMs, which leads to better loss values---particularly with small training sets. This is substantiated by experimental results. However, I would note that the algorithm they present is essentially the same as the one proposed in Selby et al. (2024), but applied to a different set of modelling problems.
* When comparing with in-context learning, it is found that LLMs inconsistently approximate Bayesian inference, and that their implicit priors are less informative than those that are explicitly elicited.
* Bayes factors are claimed to be a useful tool for comparing ICL and linear models with LLM-based priors. This is validated experimentally.
Methods And Evaluation Criteria: The method and evaluation criteria mostly make sense: using LLMs as a generic knowledge base has ample motivation, and model selection based on Bayes factors is well-established.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The main issue I can see with the experimental evaluation is the lack of comparison with the very related method of Gouk & Gao (2024), who also consider eliciting priors for linear models from LLMs. Other than that, the experiments are quite well thought out. The use of methods to determine leakage of datasets in the LLM pre-training, and also a private dataset, is a particularly positive aspect of the setup.
Supplementary Material: I did not review the supplemental material; I read the main paper, and some appendices that were referenced in the main paper.
Relation To Broader Scientific Literature: Compared to the two most related previous pieces of work in this area (Gouk & Gao (2024) and Selby et al. (2024)), this work provides more analysis of how the explicitly elicited priors compare to the implicit priors used during in-context learning.
One aspect that I do not think is sufficiently highlighted in the paper is that the proposed method is essentially the same as Selby et al. (2024). The difference between this previous work and the current paper lies only in the types of tasks used during evaluation---Selby et al. consider imputation and data generation problems, rather than linear modelling problems.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments and constructive feedback to improve our work. We are happy the work was found to be well-written, easy to follow, and well motivated. We are pleased that the use of strong experiments, evaluation of LLM memorisation, and use of our privately collected clinical dataset were appreciated.
In our rebuttals, alongside our point-by-point responses, we present **three new experiments**.
### **Suggested improvements**
**Q1: Methodological comparison to Selby et al. (2024)**
Thank you for your question. We list key differences between our work and Selby et al. (2024) (cited), highlighting where the novelty of our approach lies. We will improve our description of this work in the final version of the manuscript:
- Selby et al.’s core focus is on data imputation; however, they show preliminary results for eliciting distributions and compare these to human experts.
- They use these in a single predictive task. However, their method elicits a prior *over the targets* directly, much like with in-context learning, and not *over the parameters* of a model as in our work. This means that Selby et al. do not update a model using available data and are therefore not training any predictive models using the priors. This also makes Selby et al.’s work incompatible as a baseline.
- Selby et al. do not perform repeated sampling of the LLM, which we hypothesise mitigates hallucination.
- Selby et al. do not combine experts and LLMs, an important aspect of human-AI interaction. We achieve this by allowing experts to provide natural language information in task descriptions, simplifying current prior elicitation methods. In this case, our elicited distributions contain knowledge from both the experts and the LLM’s training.
- We also present a grounded model selection criterion to decide between AutoElicit and in-context learning.
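To make that distinction concrete, the following minimal sketch (an illustration with invented numbers, not our actual implementation) shows a prior over the *parameter* of a model being updated with data in the simplest conjugate 1D case, which is exactly the step a prior over the targets alone skips:

```python
def posterior_1d(xs, ys, prior_mean, prior_var, noise_var=1.0):
    # Conjugate update for y = w*x + N(0, noise_var) with prior w ~ N(prior_mean, prior_var):
    # the posterior over the *parameter* w is again Gaussian, in closed form.
    precision = 1.0 / prior_var + sum(x * x for x in xs) / noise_var
    mean = (prior_mean / prior_var
            + sum(x * y for x, y in zip(xs, ys)) / noise_var) / precision
    return mean, 1.0 / precision

# A small sample from a task whose true slope is roughly 2
xs, ys = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]

# Informative (elicited) prior centred near the truth vs. a vague prior
m_inf, v_inf = posterior_1d(xs, ys, prior_mean=2.0, prior_var=0.25)
m_vag, v_vag = posterior_1d(xs, ys, prior_mean=0.0, prior_var=100.0)
# The informative prior yields a tighter posterior from the same few points,
# which is the small-sample benefit the elicited priors target.
```

A prior elicited over the targets, as in in-context learning, never produces such a parameter posterior, which is why it cannot be plugged in directly as a baseline here.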
**Q2: *New experiment* with Gouk & Gao (2024) as a baseline**
A more compatible baseline presented by Gouk & Gao (2024) (cited) generates synthetic data using an LLM to provide an additional likelihood term when training linear *classification* models. This method uses an LLM to generate feature values before labelling them with zero-shot in-context learning. These generated samples are provided alongside real data during the training of a linear predictive model. Therefore, Gouk & Gao’s method is susceptible to LLM data memorisation, putting it at risk of artificially providing improved posterior results on public datasets.
We compare our results to this baseline and extend their ideas to *regression* tasks to compare with all the datasets we test. This previous work was described in our manuscript but we now provide new experimental results. The figure (linked below) will be included in the final manuscript:
**FIGURE:** https://imgur.com/a/3sM2baQ
This figure is split by publicly available tasks (four right-most plots) and privately available tasks (two left-most plots). In these experiments, our approach provides significantly greater posterior accuracy and lower posterior mean squared error for both private datasets.
In the public tasks, we see that the performance of Gouk & Gao’s approach varies considerably by LLM and task. It underperforms compared to the *uninformative* prior on five occasions. On the two occasions it outperforms AutoElicit (ours), it achieves surprisingly good accuracy or mean squared error (Heart Disease with GPT-4-turbo and Diabetes with GPT-3.5-turbo), suggesting that it might be reproducing parts of the public datasets rather than generating new samples. Given our analysis in Appendix A.8 showing the memorisation ability of LLMs, and Gouk and Gao’s performance on private data, we suspect the performance of this approach is artificially improved by reproducing the true dataset. Since for AutoElicit, we prompt the language model for a prior distribution over the parameters of a predictive model, using only the feature names and a description of the dataset, the memorisation of data points is unlikely to have had an impact on our method's results (discussed further in Appendix A.8.1).
In both of the private datasets (two left-most plots), where there is no risk of LLM memorisation, AutoElicit results in improved posterior accuracy and mean squared error. On the synthetic task, our improvement is orders of magnitudes better, and on the UTI task, our method significantly increases accuracy at all training sizes. In both cases, the approach proposed by Gouk & Gao underperforms compared to an *uninformative* prior.
We will include these new results in the final version of the paper, as it provides further context to our approach.
**Thank you for taking the time to review our work. We will implement the suggested improvements to the final manuscript. If we have answered your questions, then we would appreciate you considering raising your score. If anything is still unclear, we are happy to discuss further.** | null | null | null | null | null | null |
Model-Based Exploration in Monitored Markov Decision Processes | Accept (poster) | Summary: This paper considers model-based reinforcement learning under partially observable rewards. In classical reinforcement learning, the agent always receives a reward upon executing an action in the current state of the underlying MDP. However, in practical scenarios, such as when a human provides intermittent rewards, this assumption may not hold.
The work builds on the recent model of Monitored MDPs, where the agent interacts with both the environment and a monitor (another MDP). The monitor has its own state and action spaces, both observable to the agent. The agent operates in the joint state space, executing joint actions and receiving up to two rewards (monitor and environment), both through the monitor. The crucial difference is that the monitor may hide environment rewards from the agent, though observed rewards are assumed to be truthful.
Due to partial observability of rewards, globally optimal behavior may be unattainable when some rewards are always hidden (unsolvable Mon-MDPs). Thus, the goal is to derive a minimax-optimal policy, ensuring robustness under worst-case assumptions on unobservable rewards.
To address this, the authors extend the MBIE-EB algorithm to Mon-MDPs, the primary challenge being to balance exploration-driven optimism with safety-driven pessimism. They adapt the exploration bonus of MBIE-EB by distinguishing previously observed environment rewards and leveraging KL-UCB to estimate upper confidence bounds on the probability of reward observability. This fosters initial optimism for exploration, which, for unobservable rewards, shifts to pessimism in the long run.
The key theoretical result is an MBIE-EB-like sample complexity guarantee for the algorithm to arrive at a near minimax optimal policy, which they prove to be polynomial in parameters corresponding to confidence and proximity to the minimax-optimal expected reward. This is subsequently evaluated experimentally on a range of Mon-MDP environments, primarily comparing the new MBIE-EB-based approach to existing techniques for Mon-MDPs.
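The mechanism can be sketched as follows (a minimal illustration of my reading of it; the function names and the exact optimism/pessimism interpolation are my own, not the paper's code):

```python
import math

def kl_bernoulli(p, q):
    # KL divergence between Bernoulli(p) and Bernoulli(q), clamped for stability.
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(successes, n, log_t):
    # KL-UCB upper confidence bound on a Bernoulli mean (Garivier & Cappe, 2011),
    # found by bisection since there is no closed form.
    if n == 0:
        return 1.0
    p_hat = successes / n
    target = log_t / n
    lo, hi = p_hat, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) > target:
            hi = mid
        else:
            lo = mid
    return lo

def planning_reward(r_hat, n_obs, n_visits, t, r_max, r_min, beta=1.0):
    # Reward estimate used for planning: optimism while observing the reward
    # still seems plausible, pessimism (r_min) once the UCB on the
    # observation probability collapses.
    if n_obs > 0:
        return r_hat + beta / math.sqrt(n_obs)  # MBIE-EB-style bonus
    obs_ucb = kl_ucb(n_obs, n_visits, math.log(max(t, 2)))
    return obs_ucb * r_max + (1 - obs_ucb) * r_min

# Early on, an unobserved pair is treated optimistically ...
early = planning_reward(0.0, 0, 1, 2, r_max=1.0, r_min=-1.0)
# ... but after many unrewarded visits the estimate turns pessimistic.
late = planning_reward(0.0, 0, 1000, 1000, r_max=1.0, r_min=-1.0)
```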
Claims And Evidence: The claims of the paper are well supported. Theoretical claims build on established results from the literature, with proofs given for novel contributions. Practical claims are backed by the experimental evaluation.
Methods And Evaluation Criteria: The considered model class of Mon-MDPs is very recent, therefore, the range of existing benchmark environments is understandably limited. The paper evaluates the proposed approach on a small established benchmark and several seemingly novel synthetic benchmarks. Their choice is reasonable and they cover a range of interesting, though small-scale setups. Since the algorithm is model- and counting-based, all environments are grid worlds.
Theoretical Claims: The main theoretical claim builds on established results from the literature. I reviewed the corresponding proofs in the appendix and found no inaccuracies or gaps.
Experimental Designs Or Analyses: The benchmark environments, hyperparameters, and experimental setups are well documented and reasonable. The results also appear consistent and well-founded.
Supplementary Material: The authors provide an appendix with a full description of all benchmark environments, monitor types, used hyperparameters, the full experimental results, and detailed proofs for the theoretical claims. These were reviewed as described above. The paper states that the corresponding source code will be made publicly available upon publication.
Relation To Broader Scientific Literature: The paper builds on and extends the recent literature on Mon-MDPs introduced by [Parisi et al., 2024b] (AAMAS 2024), and partially observable rewards [Parisi et al., 2024a] (NeurIPS 2024). The authors extend the MBIE-EB algorithm from [Strehl and Littman, 2008] to finite-state Mon-MDPs, also leveraging upper confidence bounds as per [Garivier and Cappé, 2011] for the estimation of the chance to observe rewards. In combination, this results in a model-based RL algorithm for Mon-MDPs which they compare to the Directed-E^2 algorithm from [Parisi et al., 2024a].
Essential References Not Discussed: None that I'm aware of.
Other Strengths And Weaknesses: # Strengths
The paper is overall very well written and easy to follow. The authors do a good job of keeping the paper self-contained and motivating their contributions. The proposed algorithm is simple and elegant, yet effective, as both formally proven in their main theoretical results and practically validated in the experimental evaluation, primarily comparing it to the existing Directed-E^2 algorithm for Mon-MDPs.
The theoretical results are compelling, extending MBIE-EB-like guarantees from standard MDPs to the more complex model of Mon-MDPs. The evaluation demonstrates a clear advantage of the new algorithm over existing methods, especially in unsolvable Mon-MDPs where some rewards are never observable, for which, e.g., the Directed-E^2 algorithm seems to struggle.
# Weaknesses
I am wondering about the practical applicability of the Mon-MDP setup. The authors motivate the model with, for example, a human in the loop who provides rewards but may not always be present, thus making some rewards unobservable. While this makes sense conceptually, it also seems like a fairly coarse-grained approach in which all rewards are either observable or not during certain time periods, without exposing much additional exploitable structure.
At the same time, the model class assumes full observability of the monitor’s state space by the agent, and it’s not entirely clear how this would extend to, for example, a human-in-the-loop setting or other complex scenarios beyond a simple present/not-present setup. The monitors the authors consider are mainly random, small “button-like” states (on/off), or cases where the agent must determine which part of the monitor to consult. I’m wondering to what extent this assumption of full observability restricts potential applications of this theoretically compelling framework, and whether it might be relaxed into a partially observable state and reward setting (like Mon-POMDPs?). However, I also recognize that this may not be a shortcoming of the current paper, since it does not introduce the model class itself but proposes a new algorithm that improves upon existing ones.
The experimental evaluation is broad and covers a range of grid-worlds with diverse features and complications, all tested with different monitor designs. However, these environments are relatively small, with the largest being a 6x6 2D grid-world. While smaller setups are typical in model-based approaches, it would be interesting to see how this method scales to larger or higher-dimensional domains.
Other Comments Or Suggestions: - Line 126: Footnote 1 should probably be Footnote 3.
- Line 174: r_max should probably be r_min.
- Line 170: The definition of the equivalence class [M]_I for Mon-MDPs under reward observability is not quite self-contained here and should probably include a little clarification on state and action spaces being fixed as per [Parisi et al., 2024b]. The description "the set of all Mon-MDPs that the agent cannot distinguish" confused me for a second, since this may also include much larger but, e.g., bisimilar Mon-MDPs, also rendering the set not a singleton for solvable Mon-MDPs.
Questions For Authors: 1. To what extent does the assumption of a fully observable monitor state space restrict the applicability of this approach in real-world scenarios? Are there practical examples where this assumption is clearly justified?
2. Would an extension of this to partially observable states (i.e., unobservable monitor states) be possible (like Mon-POMDPs)?
3. How does this approach scale to larger environments in higher dimensions? What are the limiting factors, e.g., compared to plain MBIE-EB?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer TsqN for providing comprehensive and in-depth feedback! We are glad you found the paper well-written and easy to follow. Here, we try to address the points mentioned in order:
> I’m wondering to what extent this assumption of full observability restricts potential applications of this theoretically compelling framework, and whether it might be relaxed into a partially observable state and reward setting (like Mon-POMDPs?)
As you have mentioned, the introduction of the Mon-MDPs is not a contribution of our work, but we find it useful to share our thoughts around your insightful scrutiny around the applicability of Mon-MDPs.
We agree that grid-world domains do not represent the real-world applicability of most algorithms. Our choice to run experiments on these domains is because the Mon-MDP framework is relatively new and controlled small-scale experiments help shed light on the understanding of various aspects of this framework. Following the original Mon-MDP paper [Parisi 2024b], we have assumed that the agent receives a joint state from the environment and the monitor. Since these are both MDPs, they could likely be modeled by POMDPs instead. In such a (novel) framework, we imagine the agent would receive the joint observation and techniques from the POMDP literature could be adapted in future work.
> The definition of the equivalence class [M]_I for Mon-MDPs under reward observability is not quite self-contained here and should probably include a little clarification on state and action spaces being fixed as per [Parisi et al., 2024b] . The description “the set of all Mon-MDPs that the agent cannot distinguish” confused me for a second, since this may also include much larger but, e.g., bisimilar Mon-MDPs, also rendering the set not a singleton for solvable Mon-MDPs.
As you have pointed out, the definition is given by [Parisi et al., 2024b], but we will revisit it in the appendix of the final version and further clarify the concept.
>To what extent does the assumption of a fully observable monitor state space restrict the applicability of this approach in real-world scenarios? Are there practical examples where this assumption is clearly justified?
If the state of the monitor and the environment are fully observable to the agent, then Mon-MDPs would suffice. We feel that many monitored settings match the fully observable criteria. Often the RL practitioner is instrumenting the system that provides rewards, and so the full state is more easily known and possibly provided to the learning agent (e.g., the status of sensors that measure reward, or their locations in the world and under what conditions they provide signals). Human monitors, though, might be a good example that would not fit the fully observable setting as they typically would have internal state that could not be provided to the agent.
> Would an extension of this to partially observable states (i.e., unobservable monitor states) be possible (like Mon-POMDPs)?
Likely - please see our answer above.
> How does this approach scale to larger environments in higher dimensions? What are the limiting factors, e.g., compared to plain MBIE-EB?
The biggest limiting factor is the direct dependency on the counts of visiting state-action pairs or observing the environment rewards (which applies to other tabular methods, including MBIE-EB). As pointed out in the Discussion section (line 416), using pseudocounts could ameliorate this shortcoming to some extent; pseudocounts have been applied to improve exploration with deep RL methods [Bellemare et al., 2016; Tang et al., 2017]. The other limitation (distinct from MBIE-EB) is the computation time of the KL-UCB term, which does not have a closed form. However, this term can be replaced with a typical bonus $\frac{\alpha}{\sqrt{N(s, a)}}, \alpha > 0$, which can again be extended with pseudocounts.
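For illustration (a sketch with invented constants, not our implementation), the closed-form alternative to the iterative KL-UCB solver is simply:

```python
import math

def sqrt_bonus(n, alpha=1.0):
    # Closed-form optimism bonus: decays as visits accumulate.
    # With pseudocounts, n can be a non-integer density-derived estimate,
    # so the same formula carries over to larger state spaces.
    return alpha / math.sqrt(n) if n > 0 else float("inf")

print(sqrt_bonus(1), sqrt_bonus(100))  # 1.0 0.1
```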
---
Rebuttal Comment 1.1:
Comment: I thank the authors for comprehensively addressing my questions and concerns. Having read their response and the other reviews, I find that the presented work is a nice contribution to the so far rather niche, but as other reviewers also point out, important field of Mon-MDPs, which might offer promising potential for future developments in RL under partially observable rewards. I think the authors should make the motivation and concrete examples of applications that fit into the framework of the current work more prominent and clear in the paper, as they did in their response. In particular, it would be helpful to also highlight what is currently not covered by the present framework, such as human-in-the-loop setups with unobservable internal state of the monitor. I believe that the direction of partially observable monitor states would be a very interesting avenue for future work in this line of research.
Since the paper is very well written and comprehensive and advances the field of Mon-MDPs and partially observable rewards, I decided to raise my score and recommend acceptance, although I strongly encourage the authors to address the comments provided in the reviews. | Summary: The current paper consider the task of bounding the sample complexity of exploration in monitored MDP where the reward of the environment is observed only for some of the state action pairs and where the minimum non zero probability of observing an environment reward is lower bounded by $\rho$.
Interestingly, the authors bound the sample complexity of exploration also for non-solvable Mon-MDPs, using as comparator not the optimal policy but the minimax policy, which is optimal in the worst-case MDP, that is, the one that has minimum reward in the state-action pairs for which the environment never reveals the reward function.
### After rebuttal
My opinion remains positive.
I think that the lower bound will improve the paper. I hope you can really include it in the final version.
Claims And Evidence: Yes, there is clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed experiments and theory make sense for the considered problem.
An additional theoretical result that the authors could try to obtain is the regret from the initial distribution (allowing the agent to reset from it).
In this way the authors could argue that they can recover a policy almost as good as the minimax policy for solving the Mon-MDP. The current guarantees only say that the recovered policy is as good as the minimax one for states that are visited by the learner after a long enough number of steps.
I think that this is an interesting future direction that the authors could pursue.
Theoretical Claims: The proofs seem correct. I had a quick look at all of them.
Experimental Designs Or Analyses: The experiments are complete. They are limited to tabular tasks but it makes sense to run experiment exactly in the same setting for which the theory applies.
Supplementary Material: There is no supplementary material
Relation To Broader Scientific Literature: I think that the related literature is well covered.
However, I think that the authors should motivate the metric that they bound in their theoretical guarantees a bit better.
In particular, they should say that this quantity is known as sample complexity of exploration and it was first studied in Kakade's PhD thesis.
https://homes.cs.washington.edu/~sham/papers/thesis/sham_thesis.pdf Theorem 8.3.2
Also, there are several follow-up works looking at the same metric:
https://proceedings.mlr.press/v19/szita11a/szita11a.pdf Definition 5.1
Lattimore & Hutter PAC Bounds for Discounted MDPs
I think the authors should add a discussion motivating this metric and referring to these previous works.
Essential References Not Discussed: See above
Other Strengths And Weaknesses: The algorithmic derivation is very well explained. I particularly liked the steps explaining why plain UCB approaches would fail due to excessive optimism and why assigning $r_\min$ to unobserved state-action pairs would fail due to excessive pessimism.
Other Comments Or Suggestions: I think that the authors should add pseudocode of the algorithm.
Questions For Authors: Have you tried to prove a lower bound to show that the dependence on $\rho^{-1}$ is necessary?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer gjFL for providing valuable feedback! We are glad you found the algorithmic derivation very well explained. Here, we try to address the points mentioned in order:
> An additional theoretical result that the author could try to obtain is the regret from the initial distribution ( allowing the agent to reset from it). In this way the author could argue that they can recover a policy almost as good as the minmax policy in solving the Mon-MDP. The current guarantees only say that the recovered policy is as good as the minmax one for states that are visited by the learner after a long enough number of steps. I think that this is an interesting future direction that the authors could pursue.
We thank the reviewer for the suggestion! We agree that would be an interesting future direction.
> I think the authors should add a discussion motivating this metric and referring to these previous works.
We agree. We’ll cite the suggested works and elaborate more on the motivation behind sample complexity in the final version of the paper.
> I think that the author should add a pseudocode of the algorithm.
We’ll add pseudocode to the appendix in the final version of the paper.
> Have you tried to prove a lower bound to show that the dependence on $\rho^{-1}$ is necessary?
We felt the dependence was indeed necessary, but hadn’t thought about making the argument formal until your question. One can construct hard examples from simple single-state Mon-MDPs (embodying a bandit problem), where arm rewards are only observable with probability $\rho$. Intuitively, since it will take $O(1/\rho)$ pulls to observe any arm’s reward, this should give exactly this lower bound. We will aim to make this formal (building off of Mannor and Tsitsiklis’s (2004) bandit PAC-bounds) in the final version.
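The geometric-arrival intuition behind this lower-bound sketch can be checked with a quick simulation (a hypothetical setup, not code from the paper): if an arm's reward is observable with probability $\rho$ on each pull, the first observation arrives after $1/\rho$ pulls in expectation.

```python
import random

def pulls_until_observed(rho, rng):
    """Pull one arm until its reward is observed (probability rho per pull)."""
    pulls = 1
    while rng.random() >= rho:
        pulls += 1
    return pulls

def mean_pulls(rho, trials=20000, seed=0):
    """Empirical mean number of pulls before the first reward observation."""
    rng = random.Random(seed)
    return sum(pulls_until_observed(rho, rng) for _ in range(trials)) / trials

# The empirical mean matches the geometric expectation 1/rho.
for rho in (0.5, 0.1):
    print(rho, round(mean_pulls(rho), 2))
```

This is only the single-arm intuition; the formal argument the authors mention would build hard bandit instances on top of it, following Mannor and Tsitsiklis (2004).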
---
Rebuttal Comment 1.1:
Comment: Thanks for your answer!
I think that the lower bound will improve the paper. I hope you can really include it in the final version.
Best,
reviewer
---
Reply to Comment 1.1.1:
Comment: Building off of Mannor and Tsitsiklis’s (2004), now we have done the derivation and shown the necessary existence of $1 / \rho$. We will add this finding to the final version. | Summary: This work extends model-based interval estimation with exploration bonus (MBIE-EB), a well-known model-based exploration algorithm for MDPs with PAC guarantees, to the monitored MDP (Mon-MDP).
The monitored MDP relaxes the assumption that the reward is observed at every time step and, instead, lets this be determined by a monitor.
This monitor can be, for example, a human giving rewards to the agent (but only if they are present) or depend on an unreliable / expensive sensor.
This monitor is itself a stateful (Markovian) entity that observes the true environment and provides a "monitor" reward and an environment reward.
The goal is to optimize for the true environment return plus the monitor reward.
MBIE-EB works on constructing optimistic MDPs to ensure the problem is fully explored, based on counting visits.
The extension proposed here has two parts.
First, the optimistic counting is also applied to the transition and reward function of the monitor.
Second, to ensure good worst-case performance when rewards of certain state-action pairs are never observable, the agent becomes pessimistic after a certain horizon (a hyperparameter).
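The two design choices just described can be illustrated with a minimal sketch (hypothetical names, constants, and bonus form; not the paper's actual implementation): an optimistic count-based reward estimate that is switched to a pessimistic one for pairs whose reward has never been observed after the horizon.

```python
import math

R_MIN, R_MAX = -1.0, 1.0  # assumed reward bounds

def reward_estimate(reward_sum, obs_count, total_visits, horizon, beta=1.0):
    """Optimistic reward estimate with an MBIE-EB-style count bonus, and a
    pessimistic fallback for state-action pairs whose reward was never
    observed within the horizon."""
    if obs_count > 0:
        # empirical mean plus a bonus that shrinks with the observation count
        return reward_sum / obs_count + beta / math.sqrt(obs_count)
    if total_visits <= horizon:
        return R_MAX   # optimism: keep trying to observe this reward
    return R_MIN       # pessimism: treat the reward as unobservable

print(reward_estimate(0.0, 0, 5, horizon=10))    # still optimistic
print(reward_estimate(0.0, 0, 50, horizon=10))   # switched to pessimism
```

The switch from `R_MAX` to `R_MIN` is what avoids both the excessive optimism of plain UCB and the excessive pessimism of assigning $r_\min$ from the start, as discussed in the review below.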
This method is compared to directed exploration-exploitation (Directed-E2) on several benchmarks reported in Directed-E2's paper.
There are no other baselines, which is understandable given the lack of SOTA due to the recency of Mon-MDP's definition, but the experimental section is insightful and convincing.
Lastly, the method comes with sample-complexity guarantees, which seem to be based on MBIE-EB's original proof techniques but contain some novel components.
## update after rebuttal
The rebuttal period has not changed my mind regarding the (positive) score.
Claims And Evidence: = Worst-case performance
The method is claimed to learn policies for worst-case Mon-MDP scenarios (of never observing rewards in some state-action pairs).
This is well tackled both intuitively when introducing the method, as well as in empirical studies on convincing domains.
= Sample complexity guarantees.
It is claimed to have the first sample-complexity guarantees and the proof _looks_ good.
Methods And Evaluation Criteria: See summary, the method is evaluated on Mon-MDP benchmarks where they show their proposed method reliably outperforms the SOTA (for which the domains were designed).
It would have been nice to see the performance of some non-Mon-MDP algorithms, especially on MDPs that are less obviously designed with these Mon-MDP algorithms in mind.
Similarly, some ablation studies on parameter robustness and different versions of the proposed method would have been insightful and helpful.
Theoretical Claims: The method is shown to converge within finite samples, depending on parameters inherited from MBIE-EB and the minimum non-zero probability that a reward is seen, `p`.
In particular, the sample complexity is `O(... 1/p)`.
Experimental Designs Or Analyses: The usual experiments on performance over time, which is fine.
Supplementary Material: The appendix includes more detailed description of experiments and the derivation of the proof.
I am not familiar with sample-complexity proofs, so I am unable to confirm all the steps are correct.
Relation To Broader Scientific Literature: This work is specifically targeting Mon-MDPs, which is a rather niche (but I can see important) problem setting.
The contributions are unlikely to have broad influence in other fields, as the work is, instead, a (non-trivial) application of an existing algorithm and proof to this use case.
Essential References Not Discussed: Nothing came to my mind.
Other Strengths And Weaknesses: Nothing comes to mind.
Other Comments Or Suggestions: It would have been nice to introduce / describe KL-UCB in more depth, at least define it mathematically.
Questions For Authors: Nothing comes to mind.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank Reviewer fk7j for providing valuable feedback! Here, we try to address the points mentioned in order:
> It would have been nice to see some non Mon-MDP algorithms performance, especially on MDPs that are less obviously designed with these Mon-MDP algorithms in mind. Similarly, some ablation studies on parameter robustness and different versions of the proposed method would have been insightful and helpful.
In Figure 9 in Appendix B.3 the first column shows the performance on traditional MDPs. Parisi et al. (2024a) showed the superior performance of Directed-$E^2$ against conventional exploration strategies such as $\varepsilon$-greedy, optimistic initialization, $\varepsilon$-greedy with count bonuses, $\varepsilon$-greedy with UCB bonus and $\varepsilon$-greedy with [long-term UCB bonus](https://www.mdpi.com/1999-4893/15/3/81) [Parisi et al., 2022]. Our conclusion is that when our proposed algorithm, Monitored MBIE-EB, outperformed Directed-E$^2$, it simultaneously outperformed five other baselines as well. We will make this more explicit in the final version.
We have run ablation studies on the significance of pessimism and observe episodes in which the agent only seeks observability of the environment reward. The findings show that both are needed to achieve the results in the paper. We’ll add the ablation experiments to the appendix in the final version.
> It would have been nice to introduce / describe KL-UCB in more depth, at least define it mathematically.
We have given the explicit formula of KL-UCB in Appendix E, page 26, line 1383. We will add a footnote to the main body of the paper with the formula and reference to the appendix. | Summary: In this paper, authors develop a model-based interval estimation algorithm for Monitored MDPs. Authors prove typical desired properties about this algorithm, and compare its performance on a suite of 24 benchmark environments to the previous SOTA for Mon-MDPs called $E^2$. The results show notable improvements.
Claims And Evidence: Claims:
- The new model-based algorithm can fully exploit the problem structure, take advantage of a known monitor, have worst-case guarantees for unsolvable Mon-MDPs even without specific initialization, and have non-asymptotic proofs of convergence.
- Evidence: Theory work in Section 3 with proofs in the Appendix.
- MBIE-EB performs better than $E^2$ with random initialization:
- Evidence: Appendix B.4 states that with random initialization, essentially no learning occurs, but I think a Figure showing this and comparing the performance to MBIE-EB would be useful.
- The new algorithm outperforms Mon-MDP baselines such as $E^2$.
- Evidence: Figure 6 Plots.
Methods And Evaluation Criteria: Authors compare against a suite of 24 benchmark grid environments such as "River Swim". These environments seem appropriate as a first test of this method, and their method outperforms the baseline, $E^2$. I wonder if there are any other relevant baselines that should be included?
They suggest future work such as pseudocounts that would expand the applicability of the approach to non-enumerable state spaces. It seems reasonable for this to be left to future work rather than included in this paper.
Theoretical Claims: No.
Experimental Designs Or Analyses: Authors provide 95% confidence intervals, and hyperparameters are clearly stated in Appendix C.
Supplementary Material: I briefly looked through the appendix.
Relation To Broader Scientific Literature: This paper extends the classic MBIE algorithm to Mon-MDPs.
Essential References Not Discussed: Unknown
Other Strengths And Weaknesses: The main weakness of the paper, in my opinion, is that I am not sure how widely used Mon-MDPs are. Within domain, the algorithm suggested in the paper is certainly good, but without further knowledge of this literature, I am not sure what other related extended MDP formulations have similar results known.
Other Comments Or Suggestions: Figure 6: It looks to me like 4/24 of the environments have on par performance, not 2/24.
Questions For Authors: Not at this time.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer VJG2 for providing valuable feedback! Here, we try to address the points mentioned in order:
> Appendix B.4 states that with random initialization, essentially no learning occurs, but I think a Figure showing this and comparing the performance to MBIE-EB would be useful.
Figure 4.b and Figure 4.c show the lack of Directed-$E^2$'s learning and comparison with Monitored MBIE-EB. We will make a reference to these results in that section of the Appendix.
> Figure 6: It looks to me like 4/24 of the environments have on par performance, not 2/24.
We agree with the reviewer that from Figure 6 alone, 4 of the environments look to have similar performance. Due to the scaling of the y-axis, the final performance on two of the plots is difficult to evaluate. You can find rescaled results in Figure 9 in Appendix B.3, where it's clear 2/24 have on-par performance, which is the basis for the claim. We will update this figure.
Morse: Dual-Sampling for Lossless Acceleration of Diffusion Models | Accept (poster) | Summary: This work proposes a method to speed up diffusion models by skipping steps (Dash) and by compensating errors for the Dash (Dot). The Dash model uses a pre-trained model with mere skipping steps (i.e., existing diffusion models such as Stable Diffusion, but using larger, less steps) while the Dot model must be trained to provide adaptive residual feedback. LoRA structure was used for the Dot model along with the weight sharing with the Dash model for efficient tuning. This work was tested with Stable Diffusion and LCM-SDXL, showing 1.78x to 3.31x speed ups with comparable performance.
Claims And Evidence: While there were some unclear aspects, this manuscript seems to provide partial evidence for its claims, such as speed-ups (1.78x - 3.31x), thus demonstrating the effectiveness of the proposed method. However, these claims were not well-compared with other recent works, so it is not straightforward to see if the proposed method is indeed state of the art. In fact, it is unclear if the proposed method is indeed "fast" as claimed throughout the manuscript, since there are a number of missing comparisons and a number of new works on one-step diffusion models. Moreover, "Universally" in the title seems to be an over-claim, since the proposed architecture of the Dot model was not tested universally (enough) on other recent diffusion models.
Methods And Evaluation Criteria: The methods make sense to me. However, the comparison in Table 2 seems somewhat unfair, since Stable Diffusion is a pre-trained model while this work requires additional tuning. There are a number of recent works that do not even require tuning, such as Ma et al. CVPR 2024 (cited in this manuscript) and Wang et al., PFDiff, ICLR 2025 (available online since last year), so this table seems to deliver a biased message without properly discussing other related works.
Theoretical Claims: There is no theoretical claim in this manuscript as far as I know.
Experimental Designs Or Analyses: Experimental designs or analyses were good except for the lack of comparisons with other similar baselines such as DeepCache (or PFDiff). While they look different from the proposed method, they were also using the temporal redundancies of diffusion steps in different ways (and without tuning unlike the proposed method, which seems important), so it is important to clearly show the advantage of the proposed method over related prior arts using the same underlying principles.
Supplementary Material: No, I did not read the supplementary material since the main text was good enough to understand the main ideas and the main results of this work.
Relation To Broader Scientific Literature: Step skipping strategies exploiting temporal redundancies in diffusion steps have been discussed and developed, such as DeepCache or PFDiff, but this work unfortunately did not properly address and compare with them, in my view. Moreover, there are a number of recent works on one-step diffusion models, so the insight in this work may quickly become less useful as the major diffusion model vendors adopt these one-step approaches. A full discussion of these other works and potential limitations seems important to justify the key contributions of this work.
Here are some examples for step skipping methods in diffusion models:
- Wenyang Zhou et al., EMDM: Efficient Motion Diffusion Model for Fast and High-Quality Motion Generation, ECCV 2024.
- Hui Zhang et al., AdaDiff: Adaptive Step Selection for Fast Diffusion Models, AAAI 2025.
- Zhenyu Zhou et al., Fast ODE-based Sampling for Diffusion Models in Around 5 Steps, CVPR 2024.
It is especially important to see Zhou et al. CVPR 2024 since this work is about using large step sizes with the aid of corrections.
Essential References Not Discussed: As mentioned above, the following papers and other recent works should be properly discussed and compared:
- X Ma et al., Deepcache: Accelerating Diffusion Models for Free, CVPR 2024 (cited, but not discussed)
- G Wang et al., PFDiff: Training-free Acceleration of Diffusion Models Combining Past and Future Scores, ICLR 2025 (not cited)
Other Strengths And Weaknesses: While this work seems to propose a clear method for accelerating diffusion models, there are a number of weaknesses as below:
- Requiring additional training for speed up does not look attractive over other training-free methods (PFDiff, Deepcache) or one-step diffusion models. For a new model, this work requires an additional training (190 A100 hours may not be much as compared to the original model training, but can be cumbersome when a new model is given multiple times). It is also unclear how this work can work with personalization of diffusion models or concept erasing of diffusion models.
- As mentioned above, this work seems to demonstrate its effectiveness in Stable Diffusion Models only, but there are a number of other diffusion models that are available online. It seems important to demonstrate for some of them, especially to justify the Dot method.
Other Comments Or Suggestions: In Figure 1 caption, "Latency Consistency Models" should be "Latent Consistency Models"
Questions For Authors: In Figure 8, it was interesting to see that the major FID improvements were done around the cases with CLIP scores near 28-29, while the improvements were almost none for the lowest CLIP score cases (around 25) and the highest CLIP score cases (around 30). Any insight for them?
What was the unit of LSD? In Figures 4-5, there were large FID gaps around 3-5 LSDs for the baseline and the proposed method, but their runtime latencies were at the same point - in terms of absolute computational time scales, they should not be exactly matching due to additional computation of the Dot model, right? Any explanation for them? Actually, these results were also my major concerns since even 3-5 LSD with Morse did not achieve great results as compared to consistency models with around 4 steps (not compared in here) or more recent one-step models.
Will this work (and other recent works such as Deepcache) be useful for one-step diffusion models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks a lot for your detailed comments and the recognition of our method.
**1. To your main concern** about comparisons of Morse with recent related methods, **our responses include 4 parts**:
**Part 1: Comparison with DeepCache and PFDiff.** In formulation, they and our Morse explore the temporal step redundancies for accelerating diffusion models in different ways: **1)** DeepCache (DC, for brevity) reuses the features at step t for N-1 following steps, and PFDiff utilizes the states of past steps stored in a buffer to update the states of future steps, and they are training-free; **2)** Morse is a simple dual-model design relying on step skipping and residual feedback with a small number of learnable parameters. In performance: **1)** Table A compares Morse with open-source DC on CIFAR-10 under the same settings. We can see: **a)** DC is lossy in output quality while Morse is lossless; **b)** Morse performs better in terms of both FID and Throughput; **2)** In Part A of our responses to Reviewer vMF4, we provide more results on Stable Diffusion (SD), further showing the superiority of Morse to DC; **3)** Table B compares Morse with PFDiff (code is public) on SD, showing Morse is better than PFDiff.
**Table A**: Results on CIFAR-10. **DC denotes DeepCache**
|Method|FID↓|Throughput↑(img/second on a 4090 GPU)|
|-|-|-|
|DDIM(100 steps)|4.3|14.4|
|+DC(N=2, N: the cache updating at every N steps)|4.4|19.1|
|+DC(N=3)|4.8|22.2|
|+DC(N=5)|5.7|23.7|
|+DC(N=10)|10.0|27.2|
|+Morse(LSD=100)|**4.1**|14.4|
|DDIM(50 steps)|4.8|29.1|
|+Morse(LSD=50)|4.4|28.8|
|DDIM(20 steps)|7.0|71.7|
|+Morse(LSD=20)|5.1|**73.9**|
**Table B**: Results on MS-COCO. Evaluated on 10k samples.
|Method\LSDs|5|10|15|20|
|-|-|-|-|-|
|SD|23.9|16.8|16.1|16.0|
|+PFDiff|18.3|13.1|13.6|14.0|
|+Morse|**13.9**|**11.7**|**12.2**|**13.6**|
**Part 2: Comparison with Step Skipping Methods.** Thanks for pointing out three works in this line: **1)** EMDM is a GAN-based diffusion learning method tailored for video-based human motion generation, and AdaDiff is an RL-based diffusion learning method tailored for text-to-image generation in a prompt-difficulty-aware manner. Our Morse addresses a variety of image generation tasks (see Sec.3), thus it differs with them in focus and formulation (see Part 1); **2)** AMED (Zhou et al., CVPR 2024) is an improved solver using a 2-MLP network for each steps-budget to learn the mean direction for fast sampling. Table C compares Morse with AMED on CIFAR-10, showing Morse is better; **3)** In Part B of our responses to Reviewer vMF4, we provide more experiments, showing Morse is also better than other step skipping methods.
**Table C**: Results on CIFAR-10
|Method\LSDs|3|5|7|9|
|-|-|-|-|-|
|DPM-Solver++|110.0|24.97|6.74|3.42|
|+AMED|25.95|7.68|4.51|3.03|
|+Morse|**14.36**|**5.73**|**3.31**|**2.98**|
**Part 3: Comparison with One-step Diffusion Models.** They aim to get the extreme diffusion speedup, yet lead to serious degeneration of output quality compared to full-step teachers. Methods based on step skipping (e.g., Morse & AMED) and past-step feature reuse (e.g., DeepCache & PFDiff) cannot accelerate one-step diffusion models that have no temporal redundancy, which is a common limitation. To avoid an over-claim, "universally" will be removed in our paper.
**Part 4:** Morse is a leading lossless method that needs to train only once given a pre-trained diffusion model. "190 A100 hours" is for a billion-level SD model (its pre-training cost is 1000x longer, see Table 2). For more comparisons, please check [this link](https://anonymous.4open.science/r/Morse-Reviewer-cEcd) and our responses to Reviewer vMF4/dKeM.
**2. To Q1** regarding Fig.8, the FID improvements are decent for the highest CLIP score cases, but are really small for the lowest cases, as illustrated in Table D below. This is due to the random sampling strategy of guidance scales (GS) from 2 to 10 when training the Dot model (see Line580-585). The lowest CLIP scores appear at GS=2, occupying a small portion of samples, making it hard to improve FIDs.
**Table D:** Results (FID|CLIP score) on MS-COCO with SD
|Method\LSDs|10|20|50|
|-|-|-|-|
|SD(GS=2)|18.8\|25.4|11.1\|26.5|9.0\|27.0|
|+Morse|18.7\|25.4|11.1\|26.5|9.0\|27.0|
|SD(GS=7.5)|11.8\|29.7|12.4\|30.1|13.5\|30.0|
|+Morse|9.3\|29.8|11.3\|30.1|13.3\|30.0|
**3. To Q2** regarding what is the unit of LSD, it is the Latency per Step of the baseline Diffusion model, i.e., the one-step time cost of the Dash model (see Line244-253). Recall that, in our Morse, Dot is N times faster than Dash, so a diffusion process with n-k Dash steps (skipping k steps totally) and Nk Dot steps has the same latency (n LSDs) to the baseline running n steps. This is why, given n LSDs, the runtime latencies of the diffusion model with and w/o Morse are at the same point in Fig.4-5. The runtime speed under different settings of n LSDs is shown in Table 5 of our paper.
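The latency bookkeeping in this answer can be checked with a line of arithmetic (illustrative numbers only): with Dash costing L seconds per step and Dot costing L/N, replacing k of n Dash steps by N·k Dot steps leaves the total latency unchanged at n·L, i.e., n LSDs.

```python
def total_latency(n, k, N, L=1.0):
    """Latency of a run with n - k Dash steps (cost L each) plus N*k Dot
    steps (cost L/N each, since Dot is N times faster than Dash)."""
    return (n - k) * L + (N * k) * (L / N)

# Any split of the budget between Dash and Dot keeps the latency at n LSDs:
# (n - k)*L + N*k*(L/N) = n*L for every k.
for k in range(0, 11):
    assert abs(total_latency(n=10, k=k, N=4) - 10.0) < 1e-9
print("latency is 10 LSDs for every k")
```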
**4. To Q3**, the answer is no, as clarified in Part 3. | Summary: This paper proposes a new faster sampling method for diffusion models. The goal of the paper is to provide a faster sampling method, not sacrificing performance (unlike many recent distillation-based methods). The main idea is to use another (the "dot") model in the sampling process. This additional sampler "corrects" the jump sampling of the main model, and is relatively quick and requires less training effort (downsizing a pretrained model and adding minimal extra layers, trained by LoRA). This enables more runs in the same time period, which greatly enhances the sampling efficiency. Accordingly, the focus boils down to reducing the entire time cost rather than minimizing the number of steps. Experiments show that the proposed method can enhance the performance of existing models and provide a two- to four-times speed up.
## update after rebuttal
Some reviewers pointed out that using an additional model can be a burden, and it might not be as favorable as other competing approaches. However, I still think that the two-denoiser approach is interesting and novel, and the additional results in the rebuttal sufficiently eliminate the doubts. I maintain my original score.
Claims And Evidence: Using another (lightweight) sampler to improve the sampling process in diffusion models is quite new and interesting. Recent one-step generators are somewhat limited in performance, but this method can be a viable alternative without sacrificing performance. The experimental results support this claim.
Methods And Evaluation Criteria: The design of the "dot" model is quite simple and efficient. The proposed method is evaluated on Stable Diffusion and LCM-SDXL, which is good enough in my opinion.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The paper provides extensive ablation (and hyperparameter tuning) experiments. The results are generally convincing.
However, one complaint I have is that there is no direct comparison to distillation-based methods. I do understand that this paper's focus is somewhat different from these, but as the paper criticizes their shortcomings, comparing the performance/speed to these will provide a better picture.
Supplementary Material: I briefly checked it (all parts).
Relation To Broader Scientific Literature: Diffusion models have become a significant part of generative models, so providing a faster method without sacrificing performance can benefit many related areas. Moreover, the main concept (using another auxiliary sampler to correct jump sampling) is quite novel and interesting.
Essential References Not Discussed: The bibliography is thorough enough.
Other Strengths And Weaknesses: Please see the above points.
Other Comments Or Suggestions: As mentioned above, comparing the performance/speed to the distillation-based method will benefit the readers.
Questions For Authors: - In (5), the "dot" model receives five inputs, three of which are images. This model is a modified version of a pretrained diffusion model, so how are these inputs combined? Are they concatenated, and then is the resulting channel reduced by the first (extra) layer? This is not really explained in the paper or the supplementary material.
- Moreover, the explanation about the structure of the dot model itself is somewhat confusing. It says that the pretrained model is fixed, two extra layers are added, and then LoRA is applied? This was somewhat confusing, and after seeing Fig. 3, I guessed that LoRA is applied to the pretrained part. Articulating the description would benefit the readers.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you so much for the constructive comments, and the recognition of our work including the proposed method, the experiments and the performance. Please see our below responses to your concerns and questions one by one.
**1 To your concern about** “However, one complaint I have is that there is no direct comparison to distillation-based methods…comparing the performance/speed to these will provide a better picture.”
**Our responses are:** Thanks for your valuable suggestion. We agree that a direct comparison of Morse with distillation-based methods would improve our work further.
In Section 3.4 of our paper, we have shown that Morse can be combined with consistency distillation, which achieves an average speedup 1.43× for Latent Consistency Model on MS-COCO (see Table 3).
Here, we further provide a direct comparison with **BK-SDM** [1], a state-of-the-art distillation method that constructs lightweight diffusion models through architectural compression and feature distillation. As shown in Table A, when comparing Stable Diffusion+Morse (10 LSDs) with BK-SDM-base|-small|-tiny models (25 steps), Morse achieves better FID scores (8.60 vs 13.35|14.17|15.60) while maintaining higher throughput (4.67 vs 2.79|2.84|2.92), demonstrating the superiority of Morse to BK-SDM.
Additionally, following the suggestions from Reviewer vMF4/cEcd, we provide further comparisons with other acceleration paradigms (e.g., feature reuse, sampling schedule optimization, quantization). For instance, our Morse outperforms DeepCache [2], a state-of-the-art feature reuse method, on MS-COCO with Stable Diffusion. The results are presented in Table B below. More results and discussions can be found in our responses to Reviewer vMF4/cEcd.
**Table A**: Comparison of Morse and BK-SDM on MS-COCO with DDIM solver. The results are measured with the same settings as in our paper. Throughput is measured on a single RTX 4090 GPU. BK-SDM-base, BK-SDM-small and BK-SDM-tiny are student networks with different architectures distilled by Stable Diffusion as the teacher network. LSD denotes the Latency per Step of the baseline Stable Diffusion.
|Method|FID↓|Throughput↑ (images/second)|
|-|:-:|:-:|
|BK-SDM-base (25 steps)|13.35|2.79|
|BK-SDM-small (25 steps)|14.17|2.84|
|BK-SDM-tiny (25 steps)|15.60|2.92|
|Stable Diffusion (25 LSDs) |8.41|1.95|
|+ Morse (25 LSDs)|**8.16**|1.97|
|Stable Diffusion (20 LSDs)|8.70|2.45|
|+ Morse (20 LSDs)|8.29|2.43|
|Stable Diffusion (10 LSDs)|10.65|4.62|
|+ Morse (10 LSDs)|8.60|**4.67**|
**Table B**: Results of Morse and DeepCache on MS-COCO with Stable Diffusion v1.4 and DDIM solver. N denotes that the cached feature maps of DeepCache are updated every N steps.
|Method|FID↓|Throughput↑ (images/second)|
|-|:-:|:-:|
|Baseline (50 steps)|8.22|1.07|
|Baseline (20 steps)|8.70|2.45|
|Baseline (10 steps)|10.65|4.62|
|DeepCache (50 steps, N=2)|9.31|1.62|
|DeepCache (50 steps, N=3)|9.55|1.94|
|DeepCache (50 steps, N=5)|10.62|2.33|
|Morse (50 LSDs)|**8.15**|1.12|
|Morse (20 LSDs)|8.29|2.43|
|Morse (10 LSDs)|8.60|**4.67**|
[1] Kim, Bo-Kyeong, et al. "Bk-sdm: A lightweight, fast, and cheap version of stable diffusion." ECCV, 2024.
[2] Ma, Xinyin, Gongfan Fang, and Xinchao Wang. "Deepcache: Accelerating diffusion models for free." In CVPR, 2024.
**2. To your question about** “In (5), the "dot" model receives five inputs, three of which are images… Are they concatenated, and then is the resulting channel reduced by the first (extra) layer?…”
**Our responses are:** As shown in Figure 2 of our paper, the Dot model receives five inputs, including three image features ($x_{t_{i}}$, $x_{t_{s}}$ and $z_{t_{s}}$) and two scalars ($t_{i}$ and $t_{s}$). As you noted, the three image features are concatenated before being fed into the first extra layer. The two scalars ($t_{s}$ and $t_{i}$), corresponding to the time steps of Dash and Dot respectively, are individually embedded and then concatenated to form the time embedding vectors. Thanks for pointing this out, we will add more descriptive details to clarify it in the revised manuscript.
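This input assembly can be sketched in a few lines (shapes, the toy sinusoidal embedding, and all names are illustrative assumptions, not the paper's implementation): the three image features are concatenated channel-wise, and the two time steps are embedded separately and then concatenated.

```python
import numpy as np

def dot_model_inputs(x_ti, x_ts, z_ts, t_i, t_s, embed_dim=8):
    """Assemble the Dot model's inputs as described in the rebuttal: three
    image features concatenated along the channel axis, plus two time steps
    embedded individually and concatenated into one embedding vector."""
    feats = np.concatenate([x_ti, x_ts, z_ts], axis=1)  # (B, 3C, H, W)

    def embed(t):  # toy sinusoidal time embedding (an assumption)
        freqs = np.arange(embed_dim // 2)
        return np.concatenate([np.sin(t / 10.0**freqs), np.cos(t / 10.0**freqs)])

    time_emb = np.concatenate([embed(t_s), embed(t_i)])  # (2 * embed_dim,)
    return feats, time_emb

B, C, H, W = 2, 4, 8, 8
x = np.zeros((B, C, H, W))
feats, temb = dot_model_inputs(x, x, x, t_i=5.0, t_s=20.0)
print(feats.shape, temb.shape)  # (2, 12, 8, 8) (16,)
```

In this reading, the first extra layer of the Dot model would then reduce the 3C-channel concatenation back down, consistent with the reviewer's guess.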
**3. To your question about** “Moreover, the explanation about the structure of the dot model itself is somewhat confusing. It says that the pretrained model is fixed, two extra layers are added, and then LoRA is applied? This was somewhat confusing, and after seeing Fig. 3, I guessed that LoRA is applied to the pretrained part. Articulating the description would benefit the readers”.
**Our responses are:** Thanks a lot for your suggestion. Yes, for the Dot model, LoRA is only applied to the pre-trained layers shared from the Dash model (fixed). The newly added two layers and LoRA are trained jointly. We will revise the corresponding description about the structure to improve its clarity. | Summary: The paper introduces Morse, a framework for accelerating diffusion model sampling by training a lightweight model (Dot) to emulate a slower pretrained diffusion model (Dash). Dot leverages additional inputs, such as sampling history from earlier timesteps, to improve efficiency. Its architecture mirrors the original diffusion model but incorporates LoRA and additional downsampling/upsampling layers at the beginning and end. Only the newly added layers are fine-tuned, while the original layers operate at a reduced resolution, significantly lowering latency. By combining fast Dot models with slower Dash models during inference, Morse achieves a better quality-speed tradeoff, with 1.5–3.5× speedups depending on the setting. The authors evaluate Morse across various image generation benchmarks, demonstrating considerable improvements in efficiency without little degradation to output quality.
Claims And Evidence: Yes, the claims made in the paper are well substantiated, and sufficient ablation studies showcase the impact of each design decision.
Methods And Evaluation Criteria: The method is very simple and easy to understand. The benchmarking suite includes a good amount of baseline models and datasets, and the evaluation criteria make sense.
Theoretical Claims: No major theoretical claims were made. This paper's contribution is primarily an efficiency technique applicable to diffusion models.
Experimental Designs Or Analyses: The experiments are generally sound. However, since this paper proposes an efficiency technique for accelerating diffusion inference, it would have been preferable to compare against other established acceleration methods, such as feature reuse, timestep schedule optimization, and quantization. Some of these approaches are training-free and do not require an additional model, making a comparison particularly relevant. A discussion of how the proposed method compares in terms of tradeoffs—such as speed, memory efficiency, and ease of adoption—would strengthen the evaluation.
Supplementary Material: Yes, I did a quick pass through the entire appendix but have not examined it in detail.
Relation To Broader Scientific Literature: The slow sampling speed of diffusion- and flow-based generative models is a well-known limitation, and a significant amount of research has focused on addressing this issue.
For few-step sampling (1–4 steps), distillation-based approaches are essential for achieving acceptable quality. However, these methods require training an additional student model, which comes with substantial computational overhead.
For slightly higher NFE regimes (10–30 steps), a variety of techniques have been proposed, including improved ODE/SDE solvers, optimized sampling schedules, feature reuse across timesteps and model layers, model quantization, etc. The approach in this paper falls within this setting—while it improves sampling in the few-step regime, the sample quality still degrades significantly at very low NFE.
Among these methods, training-free approaches tend to be more impactful due to their ease of adoption, as they do not require additional model training. Comparing this method against such alternatives would help clarify its tradeoffs in terms of efficiency and practical usability.
Essential References Not Discussed: Among the relevant prior works related to accelerating diffusion model sampling, the line of work regarding optimizing timestep schedules seems to be missing, e.g. [1, 2, 3, 4].
[1] Watson, Daniel, et al. "Learning to efficiently sample from diffusion probabilistic models." arXiv preprint arXiv:2106.03802 (2021).
[2] Watson, Daniel, et al. "Learning fast samplers for diffusion models by differentiating through sample quality." International Conference on Learning Representations. 2021.
[3] Xue, Shuchen, et al. "Accelerating diffusion sampling with optimized time steps." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[4] Sabour, Amirmojtaba, Sanja Fidler, and Karsten Kreis. "Align your steps: Optimizing sampling schedules in diffusion models." arXiv preprint arXiv:2404.14507 (2024).
Other Strengths And Weaknesses: **Strengths**:
* The paper is well written and nicely structured.
* The proposed method is simple and easy to understand.
**Weaknesses**:
* No additional efficient diffusion model baselines were compared against. Adding comparisons to prominent work in the field, such as DeepCache, and discussing the pros/cons of such approaches compared to Morse would be helpful. (Addressed in the rebuttal)
Other Comments Or Suggestions: No comments.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: Thank you for recognizing our work and for your constructive comments.
**1. To your main concern** about the comparison of Morse with existing methods for accelerating diffusion inference, **our responses include 3 parts**:
**Part 1: Comparison with Feature Reuse.** Following the suggestion by you and Reviewer cEcd, we compare Morse with DeepCache (its code is public): **1)** Table A shows their results on Stable Diffusion tested with the same settings as in our paper. We can see that **a)** for acceleration, DeepCache is lossy in generation quality because it reuses most of the features at step $t$ for the N-1 following denoising steps, whereas Morse is lossless; **b)** Morse consistently achieves better throughput (e.g., 4.67 vs 2.33 images/second) and FID (8.60 vs 10.62) than DeepCache; **2)** At [this link](https://anonymous.4open.science/r/Morse-Reviewer-vMF4), we provide more results showing the superiority of Morse.
**Table A**: Results on MS-COCO(512×512) with Stable Diffusion (SD) v1.4 and DDIM solver
|Method|FID↓|Throughput↑(images/second on a 4090 GPU)|
|-|-|-|
|SD (50 steps)|8.22|1.07|
|+DeepCache (N=2; N: the cache is updated every N steps)|9.31|1.62|
|+DeepCache (N=3)|9.55|1.94|
|+DeepCache (N=5)|10.62|2.33|
|+Morse (LSD=50, LSD: the Latency per Step of the baseline SD)|**8.15**|1.12|
|SD (20 steps)|8.70|2.45|
|+Morse (LSD=20)|8.29|2.43|
|SD (10 steps)|10.65|4.62|
|+Morse (LSD=10)|8.60|**4.67**|
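The cache-updating behavior referenced in Table A (recomputing deep features every N steps and reusing them for the N-1 steps in between) can be sketched as a toy loop. This is an illustration of the general feature-reuse idea, with hypothetical names, not DeepCache's actual code.

```python
def sample_with_cache(num_steps, N):
    # Toy denoising loop: the expensive deep features are recomputed only
    # every N steps and reused in between; the cheap shallow layers run at
    # every step. Returns how often the deep pass actually ran.
    recomputes = 0
    cached = None
    for step in range(num_steps):
        if step % N == 0:
            cached = f"deep_features@{step}"  # expensive full forward pass
            recomputes += 1
        shallow_out = (step, cached)  # cheap layers consume the cached value
    return recomputes

# 50 steps with N=5: the deep layers run only 10 times.
```

This also makes the lossiness visible: for larger N, more steps consume stale features, which is why FID degrades as N grows in Table A.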
**Part 2: Comparison with Timestep Schedule Optimization.** Thanks for pointing out [1-4], which are closely related to our work. Generally, [2] presents a fast solver, GGDM (a contemporary work is DPM-Solver, one of the five samplers tested in our paper), while [1,3,4] design different strategies to choose optimal time steps given a small number of sampling steps (e.g., 10) and a solver; only the code of [3] is public. As [1] uses a log-likelihood metric instead of popular metrics like FID for evaluation, a direct comparison with Morse is not applicable. For fair comparisons with [2-4], we collect their results and ours under the same settings: **1)** Tables B & C compare Morse with [2]. We can see that, under the same step budget, **a)** [2] is mostly worse than DPM-Solver, but Morse can losslessly accelerate DPM-Solver; **b)** [2] is better than DDIM, but DDIM+Morse outperforms [2]; **2)** Tables D & E compare Morse with [3-4]. We can see that, when using the same solver and step budget, **a)** Morse is superior to [3-4]; **b)** as jump sampling is our basic component, Morse can readily accelerate this line of work (as validated on [3] in Table D).
**Table B**: Results on CIFAR-10(32x32)
|Method\LSDs|5|10|15|20|25|
|-|-|-|-|-|-|
|[2]|13.77|8.23|6.12|4.72|4.25|
|DPM-Solver|268.14|6.26|4.13|3.66|3.50|
|DPM-Solver+Morse|**13.39**|**3.93**|**3.45**|**3.45**|**3.44**|
**Table C**: Results on ImageNet(64x64)
|Method\LSDs|5|10|15|20|25|
|-|-|-|-|-|-|
|[2]|55.1|37.3|24.7|20.7|18.4|
|DDIM|147.4|39.4|28.7|22.2|20.0|
|DDIM+Morse|**46.6**|**25.8**|**21.5**|**19.2**|**18.1**|
**Table D**: Results on CIFAR-10(32x32)
|Method\LSDs|5|10|12|15|
|-|-|-|-|-|
|DPM-Solver++(SDE)|29.22|4.03|3.45|3.17|
|+[3]|12.91|3.51|3.24|3.15|
|+Morse|5.73|2.91|2.80|2.75|
|+[3] and Morse|**4.77**|**2.89**|**2.79**|**2.74**|
|DPM-Solver(SDE)|191.46|4.72|3.83|3.77|
|+Morse|8.65|3.41|3.22|3.06|
**Table E**: Results on ImageNet(64x64)
|Method\LSDs|5|10|15|20|25|
|-|-|-|-|-|-|
|DDIM|145.0|42.5|30.3|26.6|24.8|
|+[4]|50.4|29.2|24.2|22.3|21.4|
|+Morse|**46.6**|**25.8**|**21.5**|**19.2**|**18.1**|
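Jump sampling, the basic component the rebuttal refers to, amounts to running the solver on a subset of the full timestep schedule. A minimal sketch, assuming a simple evenly spaced selection (not the paper's or [3]'s actual schedule; schedule-optimization methods like [1,3,4] choose this subset non-uniformly):

```python
def jump_schedule(total_steps, num_steps):
    # Evenly spaced subset of the full timestep schedule, returned from
    # high to low, in the order a sampler would visit the timesteps.
    stride = total_steps // num_steps
    return list(range(0, total_steps, stride))[:num_steps][::-1]

short = jump_schedule(1000, 10)
# short == [900, 800, 700, 600, 500, 400, 300, 200, 100, 0]
```

Because any such subset works as an input schedule, a method built on jump sampling can in principle consume an optimized schedule in place of the uniform one, which is how the combination "+[3] and Morse" in Table D is possible.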
**Part 3: Comparison with Quantization.** In Table F, we compare Morse with Q-Diffusion (a leading open-source diffusion quantization method). We can see: **a)** Q-Diffusion is lossy due to its low-precision representation, but Morse is lossless; **b)** Morse can also accelerate the quantized diffusion model, showing an average speedup of 1.9×.
**Table F**: Results on CIFAR-10 with DDIM
|Method\LSDs|5|10|20|50|100|
|-|-|-|-|-|-|
|Baseline|41.9|13.7|7.0|4.8|4.3|
|+Morse|**13.6**|**6.6**|**5.1**|**4.4**|**4.1**|
|Q-Diffusion (4-bit weight, 8-bit activation)|43.7|14.2|7.6|5.6|5.1|
|+Morse|14.6|8.0|6.5|5.3|4.8|
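The lossiness of low-precision representation mentioned above can be illustrated with a toy symmetric uniform quantizer. This is a generic sketch of why quantization degrades quality, not Q-Diffusion's actual calibration scheme.

```python
def quantize(weights, bits=4):
    # Symmetric uniform quantization to `bits` signed bits.
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.55, 0.30, 0.91]
q, s = quantize(w, bits=4)
w_hat = dequantize(q, s)
# w_hat only approximates w; the rounding error is the quantization loss.
```

Each dequantized weight is off by at most half a quantization step (scale / 2), and this per-weight error is irreducible once the bit width is fixed, which is why the Q-Diffusion rows in Table F cannot match the full-precision baseline even at large step budgets.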
**2. To your concern** about a discussion of tradeoffs in terms of method efficiency and usability,
From the above experiments, it can be concluded that: **1)** Morse is lossless thanks to its simple dual-model design with a small number of learnable parameters. Training-free methods such as feature reuse and quantization are lossy, but Morse can improve them; **2)** As jump sampling (JS) is our basic component (note that Morse cannot speed up a 1-step model, since JS is not applicable there), Morse can accelerate the timestep schedule optimization methods [1-4], similar to the solver- and distillation-based methods tested in our paper; **3)** In practice, combining Morse with these methods provides a promising way to attain more aggressive speedup ratios under a tolerable degradation of image quality.
**3. Regarding more experiments and discussions**, please see our responses to Reviewer cEcd/dKeM.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I appreciate the additional comparisons between Morse and prior acceleration methods, and the results are quite promising. As my main concern has been sufficiently addressed, I will update my score.
---
Reply to Comment 1.1.1:
Comment: We are glad that you are satisfied with our rebuttal and have increased your score. We will add these additional experiments and discussions to our final paper, improving its quality.
Thanks again for your constructive comments, time and patience. | null | null | null | null | null | null | null | null |